Not only that, but he never generated the wealth in the first place. His savings were his, sure, but the rest of the money was essentially conned from the insurance company.
He did not make the world richer by sacrificing himself, he sacrificed himself to (dishonestly) reallocate resources.
I’d say support his actions iff you would support stealing to give to charity.
the money was essentially conned from the insurance company.
I don’t see it as “conned” (or perhaps I’m inferring some connotations that you don’t intend by that word?): The man took out “suicide insurance”. That is to say, he signed a contract with the insurance company saying something along the lines of “I’ll pay you $X per month for the rest of my life. If I don’t commit suicide for 2 years, but then commit suicide after that, then you have to give me 1 million dollars.”
I’m sure the insurance company fully understood the terms of the contract (in fact, it is practically certain that the insurance company itself wrote it) and agreed to them. They employ actuaries and lawyers to go over the drafts of their contracts and ensure they mean exactly what the company thinks they mean. No party was misled or misunderstood the terms. So how is that a con?
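To make the actuarial point concrete, here is a toy sketch of the insurer’s side of the bet. All of the numbers are invented; the point is only that the contract can be priced with the suicide clause (and its 2-year exclusion period) already taken into account.

```python
# Toy sketch of insurance pricing; every number here is invented.

monthly_premium = 100        # the "$X per month" from the contract
payout = 1_000_000           # the death benefit
p_claim_per_year = 0.001     # assumed chance per year that a policy pays out
expected_policy_years = 30   # assumed average lifetime of a policy

expected_premiums = monthly_premium * 12 * expected_policy_years
expected_payouts = payout * p_claim_per_year * expected_policy_years

print(f"expected premiums: ${expected_premiums:,}")    # $36,000
print(f"expected payouts:  ${expected_payouts:,.0f}")  # $30,000
# Across the whole pool of policyholders, premiums are set high enough
# that expected premiums exceed expected payouts.
```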
I agree, I don’t think it’s a con. It only seems like a con because you are betting with the insurance company about the contents of your brain and most people naturally assume that they understand the contents of their own brain better than some outside agency.
However, I think that assumption is pretty clearly false. Institutions have the benefit of a lot of past experience and can use that experience to understand people (and predict their behavior) better than those people understand or can predict themselves.
Most people could acquire much more near-term wealth via insurance than via work, but could not acquire more near-term wealth via theft than via work (in expected value).
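A rough illustration of that expected-value claim, with invented numbers:

```python
# Toy comparison of near-term expected wealth; all numbers are invented.

wealth_via_work = 50_000                 # a year of wages

# Insurance: pay premiums through the 2-year exclusion period, then collect.
premiums_paid = 100 * 12 * 2
wealth_via_insurance = 1_000_000 - premiums_paid

# Theft: a large haul, but a substantial chance of being caught.
haul = 100_000
p_caught = 0.7                           # assumed
loss_if_caught = 500_000                 # legal costs, prison time, etc.
wealth_via_theft = (1 - p_caught) * haul - p_caught * loss_if_caught

print(wealth_via_insurance > wealth_via_work)  # True  (997,600 vs 50,000)
print(wealth_via_theft > wealth_via_work)      # False (-320,000 vs 50,000)
```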
How was he dishonest?

Because he didn’t disclose to the insurance company that he was planning to commit suicide at the time he took out the policy(!)

So? Not revealing info != dishonesty. Unless he signed a contract that stated that he had no intent to commit suicide, I don’t think he ever lied.
Let’s say I am proficient at counting cards while playing blackjack. I go to the casino to gamble and walk away richer—consistently. This case is actually very similar to the insurance one, in that in both cases I am making a bet with some sort of large organization, and I know more about the nature of the bet than the large organization does.
Anyway, is the card counter dishonest? And if not, how is the man who commits suicide different?
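For concreteness, here is a toy simulation of the card counter’s edge (the edge sizes and frequencies are invented): he never misrepresents anything to the casino; he just bets more when the count favors him.

```python
# Toy simulation of card counting; edge sizes and frequencies are invented.
import random

def play_hand(edge):
    """Return +1 (win) with probability 0.5 + edge, else -1 (loss)."""
    return 1 if random.random() < 0.5 + edge else -1

def session(hands=1000):
    bankroll = 0.0
    for _ in range(hands):
        if random.random() < 0.2:              # count is "hot" 20% of the time
            bankroll += 10 * play_hand(0.01)   # 1% player edge: bet 10 units
        else:
            bankroll += 1 * play_hand(-0.005)  # 0.5% house edge: bet 1 unit
    return bankroll

# EV per hand = 0.2 * 10 * 0.02 + 0.8 * 1 * (-0.01) = +0.032 units,
# so a 1000-hand session averages about +32 units.
print(sum(session() for _ in range(100)) / 100)
```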
Optimizing your decisions so that other people will form less accurate beliefs is dishonesty. Making literally false statements you expect other people to believe is just a special case of this.
If you decide not to reveal info because you predict that info will enable another person to accurately predict your behavior and decline to enter an agreement with you, you are being dishonest.
Hm, I wrote that comment two years ago. My new view is that it’s not worth arguing much over the definition of “dishonesty”, so figuring out whether the guy is “dishonest” or not is just a word game—we should figure out whether others having correct beliefs is a terminal value to us, and if so, how it trades off against other terminal values. (Or perhaps individually not acting in ways that give others incorrect beliefs is a terminal value.)
As a consequentialist, I mostly say the ends justify the means. I am a little cautious due to the issues Eliezer discusses in this post, but I don’t think I’m as cautious as Eliezer is—I have a fair amount of confidence in my ability to notice when my brain is going into a failure mode like he describes.
I’m not entirely comfortable with this line of thinking. Drawing a distinction between withholding relevant information and providing false information is such a common feature of moral systems that I can’t help but think any heuristic that eliminates the distinction is missing something important. It all has to reduce to normality, after all.
That said, biases do exist, and if we can come up with a plausible mechanism by which it’d be psychologically important without being consequentially important then I think I’d be happier with the conclusion. It might just come down to how difficult it is to prove.
Drawing a distinction between withholding relevant information and providing false information is such a common feature of moral systems that I can’t help but think any heuristic that eliminates the distinction is missing something important.
The pragmatic distinction is that lies are easier to catch (or make common knowledge), so lying must be done more carefully than mere withholding of relevant information. Seeing withholding of information as a moral right is a self-delusion, part of normal hypocritical reasoning. Breaking it will make you a less effective hypocrite, all else equal.

Seeing withholding of information as a moral right is a self-delusion, part of normal hypocritical reasoning.
I assert that moral right overtly, embracing all relevant underlying connotations. I am in no way deluding myself regarding the basis for that assertion and it is not relevant to any hypocrisy that I may have.
You haven’t unpacked anything, black box disagreements don’t particularly help to change anyone’s mind. We are probably even talking about different things (the idea of “moral right” seems confused to me more generally, maybe you have a better interpretation).
You haven’t unpacked anything, black box disagreements
It seems to be your black box. I just claim the right to withhold information—and am not thereby deluded or hypocritical. (I am deluded and hypocritical in completely different ways.)
the idea of “moral right” seems confused to me more generally, maybe you have a better interpretation
It isn’t language I use by preference, even if I am occasionally willing to go along with it when others are using it. I presented my rejection as a personal assertion for that reason. While I don’t personally place much stock in objectively phrased morality I can certainly go along with the game of claiming social rights.
I just claim the right to withhold information—and am not thereby deluded or hypocritical.
Should people in general withhold relevant information more or less? There is only hypocrisy here (bad conduct given a commons problem) if less is better and you act in a way that promotes more, and self-delusion if you also believe this behavior good.
Should people in general withhold relevant information more or less? There is only hypocrisy here (bad conduct given a commons problem) if less is better and you act in a way that promotes more, and self-delusion if you also believe this behavior good.
It is no coincidence that one of the most effective solutions to a commons problem is the assignment of individual rights.
People in general should not be obliged to share all relevant information with me, nor I with them. In the same way they should not be obliged to give me their stuff whenever I want it. Because that kind of social structure is unstable and has a predictable failure mode of extreme hypocrisy.
No, my asserted right, if adhered to consistently (and I certainly encourage others to assert the same right for themselves), reduces the need for hypocrisy. This is in contrast to the advocacy of superficially ‘nice’-sounding social rules to be enforced by penalty of shaming and labeling—that is where the self-delusion lies. I prefer to support conventions that might actually work and that don’t unduly penalize those who abide by them.
Agreed that it’s practical.

I’m not entirely comfortable with this line of thinking. Drawing a distinction between withholding relevant information and providing false information is such a common feature of moral systems that I can’t help but think any heuristic that eliminates the distinction is missing something important.
I agree that a distinction should be drawn but I disagree about where. I think the morally important distinction is not between withholding information and providing false information, but why and in what context you are misleading the other person. If he’s trying to violate your rights, for example, or if he’s prying into something that’s none of his business, then lie away. If you are trying to screw him over by misleading him, then you are getting into a moral gray area, or possibly worse.
Nah, that’s just standard deontological vs. consequentialist thinking. If dishonesty is approached in consequentialist terms then it becomes just another act of (fully generalized) aggression—something you don’t want to do to someone except in self-defense or unless you’d also slash their tires, to borrow an Eliezer phrase, but not something that’s forbidden in all cases. It only becomes problematic in general if there’s a deontological prohibition against it.
Looking at it that way doesn’t clarify the distinction between lying by commission vs. lying by omission, though. There’s something else going on there.
I don’t know what you just said. For example you wrote: “that’s just standard deontological vs. consequential thinking.” What does that mean? Does that mean that I have in a single comment articulated both deontological and consequentialist thinking and set them at odds, simultaneously arguing both sides? Or are you saying I articulated one of these? If so, which one?
For my part, I don’t think my comment takes either side. Whether your view is deontological or consequentialist, you should agree on the basics, which include that you have a right to self-defense. That is the context I am talking about in deciding whether the deception is moral. So I am not saying anything consequentialist here, if that’s your point. A deontologist should agree on the right to self-defense, unless his moral axioms are badly chosen.
I think your comment describes a consequentialist take on the subject of dishonesty and implicitly argues that the deontological version is incorrect. I agree with that conclusion, but I don’t think it says anything unusual on the subject of dishonesty in particular.
You think the right to self defense is consequentialist? That’s the first I’ve heard about that.

In this context, and as a heuristic rather than a defining feature. Most systems of deontological ethics I’ve ever heard of don’t allow for lying in self-defense; it’s possible in principle to come up with one that does, but I’ve never seen a well-defined one in the wild.
I was really looking more at the structure of your comment than at the specific example of self-defense, though: you described some examples of dishonesty aimed at minimizing harm and contrasted them with unambiguously negative-sum examples, which is a style of argument I associate (pretty strongly) with a pragmatic/consequential approach to ethics. My mistake if that’s a bad assumption.
Most systems of deontological ethics I’ve ever heard of don’t allow for lying in self-defense
It’s no different in principle from killing in self defense. If these systems don’t allow lying in self defense, then they must not allow self defense at all, because lying in self defense is a trivial application of the general right to self defense.
Anyway, the fact that my point triggered a memory in you of a consequentialist versus deontological dispute does not change my point. If we delete everything you said about deontologists versus consequentialists, have you actually said something to deflect my point?
It’s no different in principle from killing in self defense. If these systems don’t allow lying in self defense, then they must not allow self defense at all, because lying in self defense is a trivial application of the general right to self defense.
I don’t think that follows. These are deontologists we are talking about. They are in the business of making up a set of arbitrary rules and saying that’s what people should do. Remembering to include a rule about being allowed to defend yourself physically doesn’t mean they will remember to also allow people to lie in self defense.
We can’t assume deontologists are sane or reasonable. They are humans talking about morality!
Well, that wasn’t a caricature...!

These are deontologists we are talking about. They are in the business of making up a set of arbitrary rules and saying that’s what people should do. Remembering to include a rule about being allowed to defend yourself physically doesn’t mean they will remember to also allow people to lie in self defense.
I don’t think it was. Just a fairly simple and non-technical description. A similar simplified description of consequentialist moralizing would not read all that much differently.
The key sentence in the comment in terms of conveying perspective was “They are humans talking about morality!” I actually suggest the description errs on the side of a positive idealized spin. Morality just isn’t that nice.
That is actually how deontologists work, though. It’s not a caricature when the people you’re talking about say this is okay because it’s Right and this isn’t because it’s Wrong and when you ask them why some things are Right and other things are Wrong, they try to conjure up the inherent Rightness and Wrongness of actions from nowhere. Seriously!
No.

I have discussed this point with a few people, and the two who self-identified as non-religious deontologists explicitly assigned objective rightness and wrongness to actions.
“Murder was wrong before there were human beings, and murder will be wrong after there are human beings. Murder would be wrong even if the universe didn’t contain any human beings”.
The kind of people who are using this word “deontologist” to refer to themselves actually are doing this.
I use the word “deontologist” to refer to myself. I do assign objective rightness and wrongness to things (technically intentions, not actions, though I will talk loosely of actions). There is no meaningful sense in which murder could be wrong in a universe that did not contain any people (humans per se are not called for) because there would be no moral agents to commit wrong acts or be the victims of rights violations. In such an uninhabited universe, it would remain counterfactually wrong for any people to murder any other people if people were to come into existence. (“Counterfactually wrong” in much the same way that it would be wrong for me to steal my roommate’s diamond tiara, if she had a diamond tiara, but since she doesn’t it’s a pointless statement.)
“Deontologist” and “Moral Objectivist” are not synonyms. Most deontologists are nonetheless objectivists. The reverse does not hold since, for instance, consequentialists are not deontologists but are subjectivists.

It is still a caricature to say deontologists conjure up Right and Wrong out of nowhere. The most famous deontologist was probably Kant, who argued elaborately for his claims.
The persistent problem in these discussions is the assumption that moral objectivism can only work like a quasi-empiricism, detecting some special domain of ethical facts. However, nobody seriously argues for it that way.
As noted by Alicorn, moral laws can apply counterfactually just as easily as natural laws.
The kind of people who are using this word “deontologist” to refer to themselves actually are doing this.
That is certainly true, but for my part I attribute that to them being humans engaging in moralizing, not to their deontology per se. The ‘objective rightness of their morals’ thing can just as well be applied to consequentialist values.
Right; I trusted them when they said it was deontology that gave them absolute values—but of course, a moralizing human would say that.

If these systems don’t allow lying in self defense, then they must not allow self defense at all, because lying in self defense is a trivial application of the general right to self defense.
‘Rights’ are most usefully thought of in political contexts; ethically, the question is not so much “Do I have a right to self-defense?” as “Should I defend myself?”.
For Kant (the principal deontologist), lying is inherently self-defeating. The point of lying is to make someone believe what you say; but, if everyone would lie in that circumstance, then no one would believe what you say. And so lying cannot be universalized for any circumstance, and so is disallowed by the criterion of universalizability.
if everyone would lie in that circumstance, then no one would believe what you say.
This is only true if the other party is aware of the circumstance. If they are not—if they are already deceived about the circumstance—then if everyone lied in the circumstance, the other party would still be deceived. Therefore lying is not self-defeating.
I was just pointing out how Kant might justify self-defense but not lying in self-defense, in summary. If you’d like to disagree with Kant, I suggest doing so against more than an off-the-cuff summary.
Though I don’t recommend bothering with it, as his ethics is based on his metaphysics and his metaphysics is false.
Understood.

I don’t disagree with your point. I just don’t see it as relevant to mine.
There are any number of ways we can slice up a moral question: initiation of harm is one, protected categories like the “not any of your business” you mentioned are another, and my omission/commission distinction is a third. Bringing up one doesn’t invalidate another.
But I think lying by omission can indeed be very bad, if you are using the lie of omission to defraud the other party, and that seems to be what is occurring in the scenario in question.
Generally speaking, we are not obligated to inform random people walking down the street of the facts. That would be active assistance, which we do not owe to random strangers. In contrast, telling random strangers active lies puts them at risk, because if they act on those lies they may be harmed. So there you have a moral distinction between failing to inform people of the truth, and informing them of lies. But if you are already interacting with someone, for example if you are buying life insurance from them with the intention of killing yourself, then they are no longer random strangers, and your obligations to them increase.
I am not arguing that lying by omission cannot be bad. Neither am I arguing for a specific policy toward lies of omission. I am arguing that folk ethics sees them as consistently less bad than lies of commission with the same consequences, and that a general discussion of the ethics of honesty ought to reflect this either by including reasons to do the same or by accounting for non-ethical reasons for the folk distinction. Otherwise you’ve got a theory that doesn’t match the empirical data.
That is how I feel.

Optimizing your decisions so that other people will form less accurate beliefs is dishonesty. Making literally false statements you expect other people to believe is just a special case of this.
Only in the sense that a dog has five legs if you call a tail a leg.
Optimising your decisions so that other people will form less accurate beliefs can only be legitimately construed as dishonest if you say or otherwise communicate that it is your intention to produce accurate beliefs.
Now that I’ve thought more about it: if there’s nothing in the agreement about suicide being intended at the time of application, then I think you’re right.
I think of insurance policies as having clauses in them about revealing any information that might affect the likelihood of a claim, but I can understand why that might not apply to life insurance policies.