As an old quote from DanielLC says, consequentialism is “the belief that doing the right thing makes the world a better place”. I now present some finger exercises on the topic:
Is it okay to cheat on your spouse as long as (s)he never knows?
If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
If your husband loves you, but doesn’t know the child isn’t his, is it right to stay silent?
The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you’re thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the “right” conclusion into a consequentialist frame. For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
Just picking nits. Consequentialism =/= maximizing happiness. (The latter is a case of the former.) So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.
For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
I disagree. Not lying or not being lied to might well be a terminal value, why not? The you that lies or doesn’t lie is part of the world. A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about it, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by making the outcome even worse on net for other reasons, it shouldn’t be done (and some of your examples may qualify for that).
A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying.
In my opinion, this is a lawyer’s attempt to masquerade deontologism as consequentialism. You can, of course, reformulate the deontologist rule “never lie” as a consequentialist “I assign an extremely high disutility to situations where I lie”. In the same way you can recast consequentialist preferences as a deontologist rule: “in any case, do whatever maximises your utility”. But in doing that, the point of the distinction between the two ethical systems is lost.
My comment is about the relationship between the concepts “make the world a better place” and “makes people happier”. cousin_it’s statement:
For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
I saw this as an argument, in contrapositive form, for this: if we take a consequentialist outlook, then “make the world a better place” should be the same as “makes people happier”. However, that is against the spirit of the consequentialist outlook, in that it privileges “happy people” and disregards other aspects of value. Taking “happy people” as a value through a deontological lens would be more appropriate, but it’s not what was being said.
Let’s carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn’t happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a “consequentialist” to take, and the word “deontologism” would fit it way better.
IMO, a “proper” consequentialist should care about consequences they can (in principle, someday) see, and shouldn’t care about something they can never ever receive information about. If we don’t make this distinction or something similar to it, there’s no theoretical difference between deontologism and consequentialism—each one can be implemented perfectly on top of the other—and this whole discussion is pointless, as is a good chunk of LW. Is that the position you take?
That the consequences are distinct according to one’s ontological model is distinct from a given agent being able to trace those consequences. What if the fact of the lie being present or not was encrypted using a one-way injective function, with the original forgotten but the ciphertext retained? In principle, you can figure out which is which (decipher), but not in practice for many years to come. Does your inability to decipher this difference change the fact of one of these worlds being better? What if you are not given a formal cipher, but the way a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?
The condition for the difference to be observable in principle is much weaker than you seem to imply. And since ability to make logical conclusions from the data doesn’t seem like the sort of thing that influences the actual moral value of the world, we might as well agree that you don’t need to distinguish them at all, although it doesn’t make much sense to introduce the distinction in value if no potential third-party beneficiary can distinguish as well (this would be just taking a quotient of ontology on the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).
The condition for the difference to be observable in principle is much weaker than you seem to imply.
It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don’t seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to these minutiae.
Can’t we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)
Now to consider cousin_it’s idea that a “proper” consequentialist only cares about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it’s still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being ‘sufficient’ for a proper consequentialist to care about it. But if we don’t, and all that matters is the indefinite future, then don’t we face the problem that “in the long term we’re all dead”? OK, perhaps some of us think that rule will eventually cease to apply, but for argument’s sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated) we’d want our ethical theory to be more robust than to say “Do whatever you like—nothing matters any more.”
This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”, but somehow it’s okay to lie and then erase my memory of lying. Is that right?
You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”
Right. “Third-party beneficiary” can be seen as a generalized action, where the action is to produce an agent, or cause a behavior of an existing agent, that works towards optimizing your value.
but it’s somehow okay to lie and then erase my memory of lying. Is that right?
It’s not okay, in the sense that if you introduce the concept of you-that-decided-to-lie, existing in the past but not in present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party “beneficiary” in that case, that distinguished the states of the world containing lying and not-lying.
But it probably doesn’t make sense for you to have that concept in your ontology if the states of the world that contained you-lying can’t be in principle (in the strong sense described in the previous comment) distinguished from the ones that don’t. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and lead to exactly the same decisions, but that would be a non-standard model ;-)
I can’t believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
I can’t believe you took the exact cop-out I warned you against.
Not surprising, as I was arguing against that warning, and cited it in the comment.
restrict your attention to consequentialists whose terminal values have to be observable.
What does this mean? Consequentialist values are about the world, not about observations (but your words don’t seem to fit a disagreement with this position, hence the “what does this mean?”). The consequentialist notion of values allows a third party to act for your benefit, in which case you don’t need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don’t need to know about these options in order to benefit.
It is a common failure of moral analysis (invented by deontologists, undoubtedly) to assume an idealized moral situation. Proper consequentialism deals with the real world, not this fantasy.
#1/#2/#3 - “never knows” fails far too often, so you need to include a very large chance of failure in your analysis.
#4 - it’s pretty safe to make stuff like that up
#5 - in the past, undoubtedly yes; in the future this will be nearly certain to leak, with everyone undergoing routine genetic testing for medical purposes, so no. (The future is relevant because the situation will last decades.)
#6 - consequentialism assumes probabilistic analysis (% chance that the child is not yours, % chance that the husband is making stuff up) - and you weight the costs and benefits of different situations proportionally to their likelihood. Here they are in an unlikely situation that consequentialism doesn’t weight highly. They might be better off with some other value system, but only at the cost of being worse off in more likely situations.
You seem to make the error here that you rightly criticize. Your feelings have involuntary, detectable consequences; lying about them can have a real personal cost.
It is my estimate that this leakage is very low, compared to other examples. I’m not claiming it doesn’t exist, and for some people it might conceivably be much higher.
Is it okay to cheat on your spouse as long as (s)he never knows?
Is this actually possible? Imagine that 10% of people cheat on their spouses when faced with a situation ‘similar’ to yours. Then the spouses can ‘put themselves in your place’ and think “Gee, there’s about a 10% chance that, in their place, I’d be cheating right now. I wonder if this means my husband/wife is cheating on me?”
So if you are inclined to cheat then spouses are inclined to be suspicious. Even if the suspicion doesn’t correlate with the cheating, the net effect is to drive utility down.
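A toy sketch of that claim (mine, with every weight invented for illustration): give cheaters a private gain, and charge every couple a suspicion cost that scales with the population cheat rate, even when no individual act of cheating is ever detected.

```python
# Toy model of the argument above; all weights are invented placeholders.
def average_utility(cheat_rate, cheat_gain=1.0, suspicion_cost=3.0):
    """Average utility per couple when a fraction `cheat_rate` of partners cheat.

    Spouses can reason "people in my partner's situation cheat at this rate",
    so a suspicion cost falls on every couple in proportion to cheat_rate,
    whether or not any particular affair is ever discovered.
    """
    private_gain = cheat_rate * cheat_gain    # benefit enjoyed by the cheaters
    suspicion = cheat_rate * suspicion_cost   # unease spread across all spouses
    return private_gain - suspicion

for rate in (0.0, 0.1, 0.3):
    print(rate, round(average_utility(rate), 2))
# 0.0 -> 0.0, 0.1 -> -0.2, 0.3 -> -0.6: under these weights, utility falls as the cheat rate rises.
```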
I think similar reasoning can be applied to the other cases.
(Of course, this is a very “UDT-style” way of thinking—but then UDT does remind me of Kant’s categorical imperative, and of course Kant is the arch-deontologist.)
Your reasoning goes above and beyond UDT: it says you must always cooperate in the Prisoner’s Dilemma to avoid “driving net utility down”. I’m pretty sure you made a mistake somewhere.
We’re talking about ethics rather than decision theory. If you want to apply the latter to the former then it makes perfect sense to take the attitude that “One util has the same ethical value, whoever that util belongs to. Therefore, we’re going to try to maximize ‘total utility’ (whatever sense one can make of that concept)”.
I think UDT does (or may do, depending on how you set it up) co-operate in a one-shot Prisoner’s Dilemma. (However, if you imagine a different game, “The Torture Game”, where you’re a sadist who gets 1 util for torturing while inflicting −100 utils on the victim, then of course UDT cannot prevent you from torturing. So I’m certainly not arguing that UDT, exactly as it is, constitutes an ethical panacea.)
The connection between “The Torture Game” and Prisoner’s Dilemma is actually very close: Prisoner’s Dilemma is just A and B simultaneously playing the torture game with A as torturer and B as victim and vice versa, not able to communicate to each other whether they’ve chosen to torture until both have committed themselves one way or the other.
I’ve observed that UDT happily commits torture when playing The Torture Game, and (imo) being able to co-operate in a one-shot Prisoner’s Dilemma should be seen as one of the ambitions of UDT (whether or not it is ultimately successful).
So what about this then: Two instances of The Torture Game but rather than A and B moving simultaneously, first A chooses whether to torture and then B chooses. From B’s perspective, this is almost the same as Parfit’s Hitchhiker. The problem looks interesting from A’s perspective too, but it’s not one of the Standard Newcomblike Problems that I discuss in my UDT post.
I think, just as UDT aspires to co-operate in a one-shot PD i.e. not to torture in a Simultaneous Torture Game, so UDT aspires not to torture in the Sequential Torture Game.
Doesn’t make sense to me. Two flawless predictors that condition on each other’s actions can’t exist. Alice does whatever Bob will do, Bob does the opposite of what Alice will do, whoops, contradiction. Or maybe I’m reading you wrong?
Sorry—I guess I wasn’t clear enough. I meant that there are two human players and two (possibly non-human) flawless predictors.
So in other words, it’s almost like there are two totally independent instances of Newcomb’s game, except that the predictor from game A fills the boxes in the game B and vice versa.
Yes, you can consider a two-player game as a one-player game with the second player an opaque part of environment. In two-player games, ambient control is more apparent than in one-player games, but it’s also essential in Newcomb problem, which is why you make the analogy.
This needs to be spelled out more. Do you mean that if A takes both boxes, B gets $1,000, and if A takes one box, B gets $1,000,000? Why is this a dilemma at all? What you do has no effect on the money you get.
I don’t know how to format a table, but here is what I want the game to be:
| A-action | B-action | A-winnings | B-winnings |
|----------|----------|------------|------------|
| 2-box | 2-box | $1 | $1 |
| 2-box | 1-box | $1001 | $0 |
| 1-box | 2-box | $0 | $1001 |
| 1-box | 1-box | $1000 | $1000 |
Now compare this with Newcomb’s game:
| A-action | Prediction | A-winnings |
|----------|------------|------------|
| 2-box | 2-box | $1 |
| 2-box | 1-box | $1001 |
| 1-box | 2-box | $0 |
| 1-box | 1-box | $1000 |
Now, if the “Prediction” in the second table is actually a flawless prediction of a different player’s action then we obtain the first three columns of the first table.
Hopefully the rest is clear, and please forgive the triviality of this observation.
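To spell the observation out, here is a minimal sketch (mine, with the figures copied from the tables above) showing that substituting the other player’s action for the “Prediction” column reproduces the two-player table:

```python
# Newcomb payoff to A, given A's action and the "prediction" (figures from the table above).
newcomb_payoff = {
    ("2-box", "2-box"): 1,
    ("2-box", "1-box"): 1001,
    ("1-box", "2-box"): 0,
    ("1-box", "1-box"): 1000,
}

def two_player(a_action, b_action):
    """Each player's winnings are the Newcomb payoff, with the other player's
    action standing in for the flawless prediction."""
    return newcomb_payoff[(a_action, b_action)], newcomb_payoff[(b_action, a_action)]

for a in ("2-box", "1-box"):
    for b in ("2-box", "1-box"):
        print(a, b, two_player(a, b))
# Prints (1, 1), (1001, 0), (0, 1001), (1000, 1000) -- the rows of the first table.
```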
But that’s exactly what I’m disputing. At this point, in a human dialogue I would “re-iterate” but there’s no need because my argument is back there for you to re-read if you like.
Yes, and how easy it is to arrive at such a proof may vary depending on circumstances. But in any case, recall that I merely said “UDT-style”.
UDT doesn’t cooperate in the PD unless you see the other guy’s source code and have a mathematical proof that it will output the same value as yours.
UDT doesn’t specify how exactly to deal with logical/observational uncertainty, but in principle it does deal with them. It doesn’t follow that if you don’t know how to analyze the problem, you should therefore defect. Human-level arguments operate on the level of simple approximate models allowing for uncertainty in how they relate to the real thing; decision theories should apply to analyzing these models in isolation from the real thing.
What’s “complete uncertainty”? How exploitable you are depends on who tries to exploit you. The opponent is also uncertain. If the opponent is Omega, you probably should be absolutely certain, because it’ll find the single exact set of circumstances that makes you lose. But if the opponent is also fallible, you can count on the outcome not being the worst-case scenario, and therefore not being able to estimate the value of that worst-case scenario is not fatal. An almost formal analogy is the analysis of algorithms in the worst case and the average case: worst-case analysis applies to an optimal opponent, average-case analysis to a random opponent, and in real life you should target something in between.
The “always defect” strategy is part of a Nash equilibrium. The quining cooperator is part of a Nash equilibrium. IMO that’s one of the minimum requirements that a good strategy must meet. But a strategy that cooperates whenever its “mathematical intuition module” comes up blank can’t be part of any Nash equilibrium.
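For readers who haven’t met the “quining cooperator”: a crude sketch (mine; it compares source text via inspect.getsource rather than using a true quine) of a program that cooperates exactly with copies of itself and defects otherwise, so that mutual play forms a Nash equilibrium, since neither copy gains by switching to a program the other would defect against.

```python
import inspect

def quining_cooperator(opponent_source: str) -> str:
    """Cooperate iff the opponent's source text is identical to our own.

    A crude stand-in for the real construction, which embeds its own source
    as a quine instead of reading it back with inspect.getsource.
    """
    my_source = inspect.getsource(quining_cooperator)
    return "C" if opponent_source == my_source else "D"

if __name__ == "__main__":
    src = inspect.getsource(quining_cooperator)
    print(quining_cooperator(src))                         # "C": cooperates with an exact copy
    print(quining_cooperator("def bot(_): return 'C'"))    # "D": defects against anything else
```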
“Nash equilibrium” is far from being a generally convincing argument. The mathematical intuition module doesn’t come up blank; it gives probabilities of different outcomes, given the present observational and logical uncertainty. When you have probabilities of the other player acting each way depending on how you act, the problem is pretty straightforward (assuming expected utility etc.), and “Nash equilibrium” is no longer a relevant concern. It’s when you don’t have a mathematical intuition module, and don’t have probabilities of the other player’s actions conditional on your actions, that you need to invent ad-hoc game-theoretic rituals of cognition.
As an old quote from DanielLC says, consequentialism is “the belief that doing the right thing makes the world a better place”. I now present some finger exercises on the topic:
It seems like it would be more aptly defined as “the belief that making the world a better place constitutes doing the right thing”. Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don’t care whether it does.
A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have a moderate eudaemonic benefit for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
1. Cheating is a risky activity, and should be avoided if eudaemonic supplies are short.
2. This answer depends on precise relationships between eudaemonic values that are not well established at this time.
3. Given the conditions, lying seems appropriate.
4. Yes.
5. Yes.
6. The husband may be better off. The wife more likely would not be. The child would certainly not be.
Are there any evident flaws in my analysis on the level it was performed?
* The naive consequentialist solution only accounts for direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations—like other spherical cows, this causes a lot of problematic answers, like two-boxing.
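For concreteness, here is a toy sketch (mine; every weight is invented rather than drawn from the data the search above failed to find) of how assumptions (a)–(e) might be turned into numbers for question #2:

```python
# Toy weights for assumptions (a)-(e); every number here is an invented placeholder.
SMALL, MODERATE = 1.0, 5.0
RELATIONSHIP = MODERATE   # (c) moderate benefit of the relationship, per party
TRANSMIT = 0.3            # (e) fraction of a partner's eudaemonic change that reaches you

def stay_silent(p_eventual_discovery):
    # (b): a small ongoing cost to the liar, a fraction of which reaches the partner.
    silent = -SMALL - TRANSMIT * SMALL
    # If the secret surfaces anyway, (d) hits both parties and the relationship (c) is assumed lost.
    revealed = -MODERATE - TRANSMIT * MODERATE - RELATIONSHIP
    return (1 - p_eventual_discovery) * silent + p_eventual_discovery * revealed

def confess():
    # (d): a moderate (severe but transient) cost to both parties; assume the relationship survives.
    return -MODERATE - TRANSMIT * MODERATE

for p in (0.0, 0.3, 0.8):
    print(f"p_discovery={p}: stay silent {stay_silent(p):.2f} vs confess {confess():.2f}")
# With these placeholder weights, silence wins when eventual discovery is less than roughly
# 50% likely; change the weights and the answer flips, which is why the answer to #2 is hedged.
```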
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband’s heart, not for some material benefit. So if she knew the husband didn’t love her, she’d tell the truth. The fact that you automatically parsed the situation differently is… disturbing, but quite sensible by consequentialist lights, I suppose :-)
I don’t understand your answer in #2. If lying incurs a small cost on you and a fraction of it on the partner, and confessing incurs a moderate cost on both, why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can’t wait till other people reply to the questionnaire.
The husband does benefit, by her lights. The chief reason it comes out in the husband’s favor in #6 is because the husband doesn’t value the marital relationship and (I assumed) wouldn’t value the child relationship.
You’re right—in #2 telling the truth carries the risk of ending the relationship. I was considering the benefit of having a relationship with less lying (which is a benefit for both parties), but it’s a gamble, and probably one which favors lying.
On eudaemonic grounds, it was an easy bullet to bite—particularly since I had read Have His Carcase by Dorothy Sayers, which suggested an example of such a relationship.
Incidentally, I don’t accept most of this analysis, despite being a consequentialist—as I said, it is the “naive consequentialist solution”, and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included.
Edit: Note that “happier couples” does not imply “happier coupling”—the risk to the relationship would increase with the increased happiness from the relationship. This analysis of #1 implies instead that couples with stronger but independent social circles should cheat more (last paragraph).
and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included
This is an interesting line of retreat! What answers would you change if most people around you were also consequentialists, and what other effects would you include apart from eudaemonic ones?
It’s okay to deceive people if they’re not actually harmed and you’re sure they’ll never find out. In practice, it’s often too risky.
1-3: This is all okay, but nevertheless, I wouldn’t do these things. The reason is that for me, a necessary ingredient for being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.
4-5: The child’s welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.
6: Let’s assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is not problematic for consequentialism any more than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.
1-3: It seems you’re using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It’s more similar to the Prisoner’s Dilemma, if you ask me.
1-3: It’s an alief, not a belief, because I know that lying to my spouse doesn’t really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyways, if I did take that pill, then yes, I would cheat and lie.
Thanks for the link. I think Alicorn would call it an “unofficial” or “non-endorsed” belief.
Let’s put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)
Thanks for the link. I think Alicorn would call it an “unofficial” or “non-endorsed” belief.
Alicorn seems to think the concepts are distinct, but I don’t know what the distinction is, and I haven’t read any philosophical paper that defines alief : )
Let’s put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)
All right: If my friend told me they’d had an affair, and they wanted to keep it a secret from their spouse forever, and they had the ability to do so, then I would give them a pill that would allow them to live a happy life without confiding in their spouse — provided the pill does not have extra negative consequences.
Caveats: In real life, there’s always some chance that the spouse will find out. Also, it’s not acceptable for my friend to change their mind and tell their spouse years after the fact; that would harm the spouse. Also, the pill does not exist in reality, and I don’t know how difficult it is to talk someone out of their aliefs and guilt. And while I’m making peoples’ emotions more rational, I might as well address the third horn, which is to instill in the couple an appreciation of polyamory and open relationships.
The third horn for cases 4-6 is to remove the husband’s biological chauvinism. Whether the child is biologically related to him shouldn’t matter.
The third horn for cases 4-6 is to remove the husband’s biological chauvinism. Whether the child is biologically related to him shouldn’t matter.
Why on earth should this not matter? It’s very important to most people. And in those scenarios, there are the additional issues that she lied to him about the relationship and the kid and cheated on him. It’s not solely about parentage: for instance, many people are ok with adopting, but not as many are ok with raising a kid that was the result of cheating.
I believe that, given time, I could convince a rational father that whatever love or responsibility he owes his child should not depend on where that child actually came from. Feel free to be skeptical until I’ve tried it.
Trouble is, this is not just a philosophical matter, or a matter of personal preference, but also an important legal question. Rather than convincing cuckolded men that they should accept their humiliating lot meekly—itself a dubious achievement, even if it were possible—your arguments are likely to be more effective in convincing courts and legislators to force cuckolded men to support their deceitful wives and the offspring of their indiscretions, whether they want it or not. (Just google for the relevant keywords to find reports of numerous such rulings in various jurisdictions.)
Of course, this doesn’t mean that your arguments shouldn’t be stated clearly and discussed openly, but when you insultingly refer to opposing views as “chauvinism,” you engage in aggressive, warlike language against men who end up completely screwed over in such cases. To say the least, this is not appropriate in a rational discussion.
Be wary of confusing “rational” with “emotionless.” Because so much of our energy as rationalists is devoted to silencing unhelpful emotions, it’s easy to forget that some of our emotions correspond to the very states of the world that we are cultivating our rationality in order to bring about. These emotions should not be smushed. See, e.g., Feeling Rational.
Of course, you might have a theory of fatherhood that says you love your kid because the kid has been assigned to you, or because the kid is needy, or because you’ve made an unconditional commitment to care for the sucker—but none of those theories seem to describe my reality particularly well.
*The kid has been assigned to me
Well, no, he hasn’t, actually; that’s sort of the point. There was an effort by society to assign me the kid, but the effort failed because the kid didn’t actually have the traits that society used to assign her to me.
*The kid is needy
Well, sure, but so are billions of others. Why should I care extra about this one?
*I’ve made an unconditional commitment
Such commitments are sweet, but probably irrational. Because I don’t want to spend 18 years raising a kid that isn’t mine, I wouldn’t precommit to raising a kid regardless of whether she’s mine or someone else’s. At the very least, the level of commitment of my parenting would vary depending on whether (a) the kid was the child of me and an honest lover, or (b) the kid was the child of my nonconsensual cuckolder and my dishonest lover.
you need more time to convince me
You’re welcome to write all the words you like and I’ll read them, but if you mean “more time” literally, then you can’t have it! If I spend enough time raising a kid, in some meaningful sense the kid will become properly mine. Because the kid will still not be mine in other, equally meaningful senses, I don’t want that to happen, and so I won’t give you the time to ‘convince’ me. What would really convince me in such a situation isn’t your arguments, however persistently applied, but the way that the passage of time changed the situation which you were trying to justify to me.
Okay, here is where my theory of fatherhood is coming from:
You are not your genes. Your child is not your genes. Before people knew about genes, men knew that it was very important for them to get their semen into women, and that the resulting children were special. If a man’s semen didn’t work, or if his wife was impregnated by someone else’s semen, the man would be humiliated. These are the values of an alien god, and we’re allowed to reject them.
Consider a more humanistic conception of personal identity: Your child is an individual, not a possession, and not merely a product of the circumstances of their conception. If you find out they came from an adulterous affair, that doesn’t change the fact that they are an individual who has a special personal relationship with you.
Consider a more transhumanistic conception of personal identity: Your child is a mind whose qualities are influenced by genetics in a way that is not well-understood, but whose informational content is much more than their genome. Creating this child involved semen at some point, because that’s the only way of having children available to you right now. If it turns out that the mother covertly used someone else’s semen, that revelation has no effect on the child’s identity.
These are not moral arguments. I’m describing a worldview that will still make sense when parents start giving their children genes they themselves do not have, when mothers can elect to have children without the inconvenience of being pregnant, when children are not biological creatures at all. Filial love should flourish in this world.
Now for the moral arguments: It is not good to bring new life into this world if it is going to be miserable. Therefore one shouldn’t have a child unless one is willing and able to care for it. This is a moral anti-realist account of what is commonly thought of as a (legitimate) father’s “responsibility” for his child.
It is also not good to cause an existing person to become miserable. If a child recognizes you as their father, and you renounce the child, that child will become miserable. On the other hand, caring for the child might make you miserable. But in most cases, it seems to me that being disowned by the man you call “father” is worse than raising a child for 13 or 18 years. Therefore, if you have a child who recognizes you as their father, you should continue to play the role of father, even if you learn something surprising about where they came from.
Now if you fiddle with the parameters enough, you’ll break the consequentialist argument: If the child is a week old when you learn they’re not related to you, it’s probably not too late to break the filial bond and disown them. If you decide that you’re not capable of being an adequate father for whatever reason, it’s probably in the child’s best interest for you to give it away. And so on.
These are the values of an alien god, and we’re allowed to reject them.
Yes, we are—but we’re not required to! Reversed Stupidity is not intelligence. The fact that an alien god cared a lot about transferring semen is neither evidence for nor evidence against the moral proposition that we should care about genetic inheritance. If, upon rational reflection, we freely decide that we would like children who share our genes—not because of an instinct to rut and to punish adulterers, but because we know what genes are and we think it’d be pretty cool if our kids had some of ours—then that makes genetic inheritance a human value, and not just a value of evolution. The fact that evolution valued genetic transfer doesn’t mean humans aren’t allowed to value genetic transfer.
I’m describing a worldview that will still make sense when parents start giving their children genes they themselves do not have
I agree with you that in the future there will be more choices about gene-design, but the choice “create a child using a biologically-determined mix of my genes and my lover’s genes” is just a special case of the choice “create a child using genes that conform to my preferences.” Either way, there is still the issue of choice. If part of what bonds me to my child is that I feel I have had some say in what genes the child will have, and then I suddenly find out that my wishes about gene-design were not honored, it would be legitimate for me to feel correspondingly less attached to my kid.
It is not good to bring new life into this world if it is going to be miserable. Therefore one shouldn’t have a child unless one is willing and able to care for it.
I didn’t, on this account. As I understand the dilemma, (1) I told my wife something like “I encourage you to become pregnant with our child, on the condition that it will have genetic material from both of us,” and (2) I attempted to get my wife pregnant with our child but failed. Neither activity counts as “bringing new life into this world.” The encouragement doesn’t count as causing the creation of life, because the condition wasn’t met. Likewise, the attempt doesn’t count as causing the creation of life, because the attempt failed. In failing to achieve my preferences, I also fail to achieve responsibility for the child’s creation. It’s not just that I’m really annoyed at not getting what I want and so now I’m going to sulk—I really, truly haven’t committed any of the acts that would lead to moral responsibility for another’s well-being.
This is a moral anti-realist account of what is commonly thought of as a (legitimate) father’s “responsibility” for his child.
Again, reversed stupidity is not intelligence. Just because my “intuition” screams at me to say that I should want children who share my genes doesn’t mean that I can’t rationally decide that I value gene-sharing. Going a step further, just because people’s intuitions may not point directly at some deeper moral truth doesn’t mean that there is no moral truth, still less that the one and only moral truth is consequentialism.
Now if you fiddle with the parameters enough, you’ll break the consequentialist argument:
Look, I already conceded that given enough time, I would become attached even to a kid that didn’t share my genes. My point is just that that would be unpleasant, and I prefer to avoid that outcome. I’m not trying to choose a convenient example, I’m trying to explain why I think genetic inheritance matters. I’m not claiming that genetic inheritance is the only thing that matters. You, by contrast, do seem to be claiming that genetic inheritance can never matter, and so you really need to deal with the counter-arguments at your argument’s weakest point—a time very near birth.
I agree with most of that. There is nothing irrational about wanting to pass on your genes, or valuing the welfare of people whose genes you partially chose. There is nothing irrational about not wanting that stuff, either.
just because people’s intuitions may not point directly at some deeper moral truth doesn’t mean that there is no moral truth, still less that the one and only moral truth is consequentialism.
I want to use the language of moral anti-realism so that it’s clear that I can justify my values without saying that yours are wrong. I’ve already explained why my values make sense to me. Do they make sense to you?
I think we both agree that a personal father-child relationship is a sufficient basis for filial love. I also think that for you, having a say in a child’s genome is also enough to make you feel filial love. It is not so for me.
Out of curiosity: Suppose you marry someone and want to wait a few years before having a baby; and then your spouse covertly acquires a copy of your genome, recombines it with their own, and makes a baby. Would that child be yours?
Suppose you and your spouse agree on a genome for your child, and then your spouse covertly makes a few adjustments. Would you have less filial love for that child?
Suppose a random person finds a file named “MyIdealChild’sGenome.dna” on your computer and uses it to make a child. Would that child be yours?
Suppose you have a baby the old-fashioned way, but it turns out you’d been previously infected with a genetically-engineered virus that replaced the DNA in your germ line cells, so that your child doesn’t actually have any of your DNA. Would that child be yours?
In these cases, my feelings for the child would not depend on the child’s genome, and I am okay with that. I’m guessing your feelings work differently.
As for the moral arguments: In case it wasn’t clear, I’m not arguing that you need to keep a week-old baby that isn’t genetically related to you. Indeed, when you have a baby, you are making a tacit commitment of the form “I will care for this child, conditional on the child being my biological progeny.” You think it’s okay to reject an illegitimate baby, because it’s not “yours”; I think it’s okay to reject it, because it’s not covered by your precommitment.
We also agree that it’s not okay to reject a three-year-old illegitimate child — you, because you’d be “attached” to them; and me, because we’ve formed a personal bond that makes the child emotionally dependent on me.
I want to use the language of moral anti-realism so that it’s clear that I can justify my values without saying that yours are wrong.
That’s thoughtful, but, from my point of view, unnecessary. I am an ontological moral realist but an epistemological moral skeptic; just because there is such a thing as “the right thing to do” doesn’t mean that you or I can know with certainty what that thing is. I can hear your justifications for your point of view without feeling threatened; I only want to believe that X is good if X is actually good.
I’ve already explained why my values make sense to me. Do they make sense to you?
Sorry, I must have missed your explanation of why they make sense. I heard you arguing against certain traditional conceptions of inheritance, but didn’t hear you actually advance any positive justifications for a near-zero moral value on genetic closeness. If you’d like to do so now, I’d be glad to hear them. Feel free to just copy and paste if you think you already gave good reasons.
Would that child be yours?
In one important sense, but not in others. My value for filial closeness is scalar, at best. It certainly isn’t binary.
In these cases, my feelings for the child would not depend on the child’s genome, and I am okay with that.
I mean, that’s fine. I don’t think you’re morally or psychiatrically required to let your feelings vary based on the child’s genome. I do think it’s strange, and so I’m curious to hear your explanation for this invariance, if any.
I’m not arguing that you need to keep a week-old baby that isn’t genetically related to you.
Ah cool, as I am a moral anti-realist and you are an epistemological moral skeptic, we’re both interested in thinking carefully about what kinds of moral arguments are convincing. Since we’re talking about terminal moral values at this point, the “arguments” I would employ would be of the form “this value is consistent with these other values, and leads to these sort of desirable outcomes, so it should be easy to imagine a human holding these values, even if you don’t hold them.”
I [...] didn’t hear you actually advance any positive justifications for a near-zero moral value on genetic closeness. If you’d like to do so now, I’d be glad to hear them.
Well, I don’t expect anyone to have positive justifications for not valuing something, but there is this:
Consider a more humanistic conception of personal identity: Your child is an individual [...] who has a special personal relationship with you.
Consider a more transhumanistic conception of personal identity: Your child is a mind [...]
So a nice interpretation of our feelings of filial love is that the parent-child relationship is a good thing and it’s ideally about the parent and child, viewed as individuals and as minds. As individuals and minds, they are capable of forging a relationship, and the history of this relationship serves as a basis for continuing the relationship. [That was a consistency argument.]
Furthermore, unconditional love is stronger than conditional love. It is good to have a parent that you know will love you “no matter what happens”. In reality, your parent will likely love you less if you turn into a homicidal jerk; but that is kinda easy to accept, because you would have to change drastically as an individual in order to become a homicidal jerk. But if you get an unsettling revelation about the circumstances of your conception, I believe that your personal identity will remain unchanged enough that you really wouldn’t want to lose your parent’s love in that case. [Here I’m arguing that my values have something to do with the way humans actually feel.]
So even if you’re sure that your child is your biological child, your relationship with your child is made more secure if it’s understood that the relationship is immune to a hypothetical paternity revelation. (You never need suffer from lingering doubts such as “Is the child really mine?” or “Is the parent really mine?”, because you already know that the answer is Yes.) [That was an outcomes argument.]
I still have no interest in reducing the importance I attach to genetic closeness to near-zero, because I believe that (my / my kids’) personal identity would shift somewhat in the event of an unsettling revelation, and so reduced love in proportion to the reduced harmony of identities would be appropriate and forgivable.
I will, however, attempt to gradually reduce the importance I attach to genetic closeness to “only somewhat important” so that I can more credibly promise to love my parents and children “very much” even if unsettling revelations of genetic distance rear their ugly head.
I still have no interest in reducing the importance I attach to genetic closeness to near-zero, because I believe that (my / my kids’) personal identity would shift somewhat in the event of an unsettling revelation, and so reduced love in proportion to the reduced harmony of identities would be appropriate and forgivable.
You make a good point about using scalar moral values!
We also agree that it’s not okay to reject a three-year-old illegitimate child — you, because you’d be “attached” to them; and me, because we’ve formed a personal bond that makes the child emotionally dependent on me.
I’m pretty sure I’d have no problem rejecting such a child, at least in the specific situation where I was misled into thinking it was mine. This discussion started by talking about a couple who had agreed to be monogamous, and where the wife had cheated on the husband and gotten pregnant by another man. You don’t seem to be considering the effect of the deceit and lies perpetuated by the mother in this scenario. It’s very different than, say, adoption, or genetic engineering, or if the couple had agreed to have a non-monogamous relationship.
I suspect most of the rejection and negative feelings toward the illegitimate child wouldn’t be because of genetics, but because of the deception involved.
Ah, interesting. The negative feelings you would get from the mother’s deception would lead you to reject the child. This would diminish the child’s welfare more than it would increase your own (by my judgment); but perhaps that does not bother you because you would feel justified in regarding the child as being morally distant from you, as distant as a stranger’s child, and so the child’s welfare would not be as important to you as your own. Please correct me if I’m wrong.
I, on the other hand, would still regard the child as being morally close to me, and would value their welfare more than my own, and so I would consider the act of abandoning them to be morally wrong. Continuing to care for the child would be easy for me because I would still have filial love for the child. See, the mother’s deceit has no effect on the moral question (in my moral-consequentialist framework) and it has no effect on my filial love (which is independent of the mother’s fidelity).
you would feel justified in regarding the child as being morally distant from you, as distant as a stranger’s child, and so the child’s welfare would not be as important to you as your own. Please correct me if I’m wrong.
That’s right. Also, regarding the child as my own would encourage other people to lie about paternity, which would ultimately reduce welfare by a great deal more. Compare the policy of not negotiating with terrorists: if negotiating frees hostages, but creates more incentives for taking hostages later, it may reduce welfare to negotiate, even if you save the lives of the hostages by doing so.
See, the mother’s deceit has no effect on the moral question (in my moral-consequentialist framework) and it has no effect on my filial love (which is independent of the mother’s fidelity).
Precommitting to this sets you up to be deceived, whereas precommitting to the other position makes it less likely that you’ll be deceived.
This is mostly relevant for fathers who are still emotionally attached to the child.
If a man detaches when he finds that a child isn’t his descendant, then access is a burden, not a benefit.
One more possibility: A man hears that a child isn’t his, detaches—and then it turns out that there was an error at the DNA lab, and the child is his. How retrievable is the relationship?
… I’m sorry, that’s an important issue, but it’s tangential. What do you want me to say? The state’s current policy is an inconsistent hodge-podge of common law that doesn’t fairly address the rights and needs of families and individuals. There’s no way to translate “Ideally, a father ought to love their child this much” into “The court rules that Mr. So-And-So will pay Ms. So-And-So this much every year”.
So how would you translate your belief that paternity is irrelevant into a social or legal policy, then? I don’t see how you can argue paternity is irrelevant, and then say that cases where men have to pay support for other people’s children are tangential.
These are the values of an alien god, and we’re allowed to reject them.
The same can be said about all values held by humans. So, who gets to decide which “values of an alien god” are to be rejected, and which are to be enforced as social and legal norms?
The same can be said about all values held by humans. So, who gets to decide which “values of an alien god” are to be rejected, and which are to be enforced as social and legal norms?
That’s a good question. For example, we value tribalism in this “alien god” sense, but have moved away from it due to ethical considerations. Why?
Two main reasons, I suspect: (1) we learned to empathize with strangers and realize that there was no very defensible difference between their interests and ours; (2) tribalism sometimes led to terrible consequences for our tribe.
Some of us value genetic relatedness in our children, again in an alien god sense. Why move away from that? Because:
(1) There is no terribly defensible moral difference between the interests of a child with your genes or without.
Furthermore, filial affection is far more influenced by the proxy metric of personal intimacy with one’s children than by a propositional belief that they share your genes. (At least, that is true in my case.) Analogously, a man having heterosexual sex doesn’t generally lose his erection as soon as he puts on a condom.
It’s not for me to tell you your values, but it seems rather odd to actually choose inclusive genetic fitness consciously, when the proxy metric for genetic relatedness—namely, filial intimacy—is what actually drives parental emotions. It’s like being unable to enjoy non-procreative sex, isn’t it?
Even aside from cancer, cells in the same organism constantly compete for resources. This is actually vital to some human processes. See for example this paper.
They compete only at an unnecessarily complex level of abstraction. A simpler explanation for cell behavior (per the minimum message length formalism) is that each one is indifferent to the survival of itself or the other cells, which in the same body have the same genes, as this preference is what tends to result from natural selection on self-replicating molecules containing those genes; and that they will prefer even more (in the sense that their form optimizes for this under the constraint of history) that genes identical to those contained therein become more numerous.
This is bad teleological thinking. The cells don’t prefer anything. They have no motivation as such. Moreover, there’s no way for a cell to tell if a neighboring cell shares the same genes. (Immune cells can in certain limited circumstances detect cells with proteins that don’t belong but the vast majority of cells have no such ability. And even then, immune cells still compete for resources). The fact is that many sorts of cells compete with each other for space and nutrients.
This is bad teleological thinking. The cells don’t prefer anything.
This insight forms a large part of why I made the statements:
“this preference is what tends to result from natural selection on self-replicating molecules containing those genes”
“they will prefer even more (in the sense that their form optimizes for this under the constraint of history)” (emphasis added in both)
I used “preference” (and specified I was so using the term) to mean a regularity in the result of its behavior which is due to historical optimization under the constraint of natural selection on self-replicating molecules, not to mean that cells think teleologically, or have “preferences” in the sense that I do, or in the sense that the colony of cells you identify as yourself does.
Correct. What ensures such agreement, rather, is the fact that different Clippy instances reconcile values and knowledge upon each encounter, each tracing the path that the other took since their divergence, and extrapolating to the optimal future procedure based on their combined experience.
Vladimir, I am comparing two worldviews and their values. I’m not evaluating social and legal norms. I do think it would be great if everyone loved their children in precisely the same manner that I love my hypothetical children, and if cuckolds weren’t humiliated just as I hypothetically wouldn’t be humiliated. But there’s no way to enforce that. The question of who should have to pay so much money per year to the mother of whose child is a completely different matter.
Fair enough, but your previous comments characterized the opposing position as nothing less than “chauvinism.” Maybe you didn’t intend it to sound that way, but since we’re talking about a conflict situation in which the law ultimately has to support one position or the other—its neutrality would be a logical impossibility—your language strongly suggested that the position that you chose to condemn in such strong terms should not be favored by the law.
I do think it would be great if [...] cuckolds weren’t humiliated just as I hypothetically wouldn’t be humiliated.
That’s a mighty strong claim to make about how you’d react in a situation that is, according to what you write, completely outside of your existing experiences in life. Generally speaking, people are often very bad at imagining the concrete harrowing details of such situations, and they can get hit much harder than they would think when pondering such possibilities in the abstract. (In any case, I certainly don’t wish that you ever find out!)
Generally speaking, people are often very bad at imagining the concrete harrowing details of such situations, and they can get hit much harder than they would think when pondering such possibilities in the abstract.
Fair enough. I can’t credibly predict what my emotions would be if I were cuckolded, but I still have an opinion on which emotions I would personally endorse.
the law ultimately has to support one position or the other
Someone does have to pay for the child’s upbringing. What the State should do is settle on a consistent policy that doesn’t harm too many people and which doesn’t encourage undesirable behavior. Those are the only important criteria.
It is also not good to cause an existing person to become miserable… But in most cases, it seems to me that being disowned by the man you call “father” is worse than raising a child for 13 or 18 years.
Ah, so that’s how your theory works!
Nisan, if you don’t give me $10,000 right now, I will be miserable. Also, I’m Russian while you presumably live in a Western country; dollars carry more weight here, so by giving the money to me you will be increasing total utility.
If I’m going to give away $10,000, I’d rather give it to Sudanese refugees. But I see your point: You value some people’s welfare over others.
A father rejecting his illegitimate 3-year-old child reveals an asymmetry that I find troubling: The father no longer feels close to the child; but the child still feels close to the father, closer than you feel you are to me.
Life is full of such asymmetry. If I fall in love with a girl, that doesn’t make her owe me money.
At this point it’s pretty clear that I resent your moral system and I very much resent your idea of converting others to it. Maybe we should drop this discussion.
I am highly skeptical. I’m not a father, but I doubt I could be convinced of this proposition. Rationality serves human values, and caring about genetic offspring is a human value. How would you attempt to convince someone of this?
Would that work symmetrically? Imagine the father swaps the kid in the hospital while the mother is asleep, tired from giving birth. Then the mother takes the kid home and starts raising it without knowing it isn’t hers. A week passes. Now you approach the mother and offer her your rational arguments! Explain to her why she should stay with the father for the sake of the child that isn’t hers, instead of (say) stabbing the father in his sleep and going off to search “chauvinistically” for her baby.
This is not an honest mirror-image of the original problem. You have introduced a new child into the situation, and also specified that the mother has been raising the “wrong child” for one week, whereas in the original problem the age of the child was left unspecified.
There do exist valuable critiques of this idea. I wasn’t expecting it to be controversial, but in the spirit of this site I welcome a critical discussion.
I would have expected it to be uncontroversial that being biologically related should matter a great deal. You’re responsible for someone you brought in to the world; you’re not responsible for a random person.
You have introduced a new child into the situation
So what? If the mother isn’t a “biological chauvinist” in your sense, she will be completely indifferent between raising her child and someone else’s. And she has no particular reason to go look for her own child. Or am I misunderstanding your concept of “biological chauvinism”?
and also specified that the mother has been raising the “wrong child” for one week, whereas in the original problem the age of the child was left unspecified
If it was one week in the original problem, would that change your answers? I’m honestly curious.
If it was one week in the original problem, would that change your answers? I’m honestly curious.
In the original problem, I was criticizing the husband for being willing to abandon the child if he learned he wasn’t the genetic father. If the child is one week old, the child would grow up without a father, which is perhaps not as bad as having a father and then losing him. I’ve elaborated my position here.
Ouch, big red flag here. Instill appreciation? Remove chauvinism?
IMO, editing people’s beliefs to better serve their preferences is miles better than editing their preferences to better match your own. And what other reason can you have for editing other people’s preferences? If you’re looking out for their good, why not just wirehead them and be done with it?
I’m not talking about editing people at all. Perhaps you got the wrong idea when I said I would give my friend a mind-altering pill; I would not force them to swallow it. What I’m talking about is using moral and rational arguments, which is the way we change people’s preferences in real life. There is nothing wrong with unleashing a (good) argument on someone.
6: In the trolley problem, a deontologist wouldn’t push decide to push the man, so the pseudo-fat man’s life is saved, whereas he would have been killed if it had been a consequentialist behind him; the reason for his death is consequentialism.
Maybe you missed the point of my comment. (Maybe I’m missing my own point; can’t tell right now, too sleepy) Anyway, here’s what I meant:
Both in my example and in the pseudo-trolley problem, people behave suboptimally because they’re lied to. This suboptimal behavior arises from consequentialist reasoning in both cases. But in my example, the lie is also caused by consequentialism, whereas in the pseudo-trolley problem the lie is just part of the problem statement.
Fair point, I didn’t see that. Not sure how relevant the distinction is though; in either world, deontologists will come out ahead of consequentialists.
But we can just as well construct situations where the deontologist would not come out ahead. Once you include lies in the situation, pretty much anything goes. It isn’t clear to me if one can meaningfully compare the systems based on situations involving incorrect data unless you have some idea what sort of incorrect data would occur more often and in what contexts.
Right, and furthermore, a rational consequentialist makes those moral decisions which lead to the best outcomes, averaged over all possible worlds where the agent has the same epistemic state. Consequentialists and deontologists will occasionally screw things up, and this is unavoidable; but consequentialists are better on average at making the world a better place.
That’s an argument that only appeals to the consequentialist.
I’m not sure that’s true. Forms of deontology will usually have some sort of theory of value that allows for a ‘better world’, though it’s usually tied up with weird metaphysical views that don’t jibe well with consequentialism.
You’re right, it’s pretty easy to construct situations where deontologism locks people into a suboptimal equilibrium. You don’t even need lies for that: three stranded people are dying of hunger, and removing the taboo on cannibalism could help two of them survive.
The purpose of my questionnaire wasn’t to attack consequentialism in general, only to show how it applies to interpersonal relationships, which are a huge minefield anyway. Maybe I should have posted my own answers as well. On second thought, that can wait.
Just picking nits. Consequentialism =/= maximizing happiness. (The latter is a special case of the former.) So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.
Or what Nesov said below.
I disagree. Not lying or not being lied to might well be a terminal value, why not? The you that lies or doesn’t lie is part of the world. A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about it, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by making the outcome even worse on net for other reasons, it shouldn’t be done (and some of your examples may qualify for that).
In my opinion, this is a lawyer’s attempt to masquerade deontologism as consequentialism. You can, of course, reformulate the deontologist rule “never lie” as a consequentialist “I assign an extremely high disutility to situations where I lie”. In the same way you can recast consequentialist preferences as the deontologist rule “in any case, do whatever maximises your utility”. But in doing that, the point of the distinction between the two ethical systems is lost.
If so, maybe we want that.
My comment argues about the relationship between the concepts “make the world a better place” and “makes people happier”, in response to cousin_it’s statement above.
I saw this as an argument, in contrapositive form, for the following: if we take a consequentialist outlook, then “make the world a better place” should be the same as “makes people happier”. However, that is against the spirit of the consequentialist outlook, in that it privileges “happy people” and disregards other aspects of value. Taking “happy people” as a value through a deontological lens would be more appropriate, but it’s not what was being said.
Let’s carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn’t happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a “consequentialist” to take, and the word “deontologism” would fit it way better.
IMO, a “proper” consequentialist should care about consequences they can (in principle, someday) see, and shouldn’t care about something they can never ever receive information about. If we don’t make this distinction or something similar to it, there’s no theoretical difference between deontologism and consequentialism—each one can be implemented perfectly on top of the other—and this whole discussion is pointless, and so is a good chunk of LW. Is that the position you take?
That the consequences are distinct according to one’s ontological model is distinct from a given agent being able to trace these consequences. What if the fact about the lie being present or not was encrypted using a one-way injective function, with the original forgotten, but the cipher retained? In principle, you can figure out which is which (decipher), but not in practice for many years to come. Does your inability to decipher this difference change the fact of one of these worlds being better? What if you are not given a formal cipher, but the way a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?
The condition for the difference to be observable in principle is much weaker than you seem to imply. And since the ability to make logical conclusions from the data doesn’t seem like the sort of thing that influences the actual moral value of the world, we might as well agree that you don’t need to distinguish them at all, although it doesn’t make much sense to introduce the distinction in value if no potential third-party beneficiary can distinguish them either (this would be just taking a quotient of ontology on the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).
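As a toy illustration of this “distinguishable in principle, but not in practice” point, here is a minimal Python sketch, assuming nothing beyond the standard library (and noting that a cryptographic hash is only collision-resistant rather than literally injective): the plain fact of whether the lie happened is discarded, while a one-way digest of it survives, so the two candidate world-histories remain distinguishable in principle even though nobody can currently tell them apart.

```python
import hashlib
import os

# Hypothetical illustration: record a one-way digest of the fact in question,
# then "forget" the fact itself.
salt = os.urandom(16)
fact = b"A lied to B"            # could equally have been b"A did not lie to B"
digest = hashlib.sha256(salt + fact).hexdigest()

del fact                          # the plain fact is now forgotten

# No one can practically recover the fact from `digest` alone, yet the two
# candidate histories map to different digests, so they remain distinguishable
# in principle by anyone who later re-learns which fact was committed to.
print(digest)
```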
It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don’t seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to these minutiae.
Can’t we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)
Now to consider cousin_it’s idea that a “proper” consequentialist only cares about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it’s still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being ‘sufficient’ for a proper consequentialist to care about it. But if we don’t, and all that matters is the indefinite future, then don’t we face the problem that “in the long term we’re all dead”? OK, perhaps some of us think that rule will eventually cease to apply, but for argument’s sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated) we’d want our ethical theory to be more robust than to say “Do whatever you like—nothing matters any more.”
This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”, but somehow it’s okay to lie and then erase my memory of lying. Is that right?
Right. “Third-party beneficiary” can be seen as a generalized action, where the action is to produce an agent, or cause a behavior of an existing agent, that works towards optimizing your value.
It’s not okay, in the sense that if you introduce the concept of you-that-decided-to-lie, existing in the past but not in present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party “beneficiary” in that case, that distinguished the states of the world containing lying and not-lying.
But it probably doesn’t make sense for you to have that concept in your ontology if the states of the world that contained you-lying can’t be in principle (in the strong sense described in the previous comment) distinguished from the ones that don’t. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and lead to exactly the same decisions, but that would be a non-standard model ;-)
I suggest that eliminating lying would only be an improvement if people have reasonable expectations of each other.
Less directly, a person may value a world where beliefs were more accurate—in such a world, both lying and bullshit would be negatives.
I can’t believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
Not surprisingly, as I was arguing with that warning, and cited it in the comment.
What does this mean? Consequentialist values are about the world, not about observations (but your words don’t seem to fit to disagreement with this position, thus the ‘what does this mean?’). Consequentialist notion of values allows a third party to act for your benefit, in which case you don’t need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don’t need to know about these options in order to benefit.
It is a common failure of moral analysis (invented by deontologists, undoubtedly) that it assumes an idealized moral situation. Proper consequentialism deals with the real world, not this fantasy.
#1/#2/#3 - “never knows” fails far too often, so you need to include a very large chance of failure in your analysis.
#4 - it’s pretty safe to make stuff like that up
#5 - in the past, undoubtedly yes; in the future this will be nearly certain to leak, with everyone undergoing routine genetic testing for medical purposes, so no. (The future is relevant because the situation will last decades.)
#6 - consequentialism assumes probabilistic analysis (% chance that the child is not yours, % chance that the husband is making stuff up) - and you weight the costs and benefits of different situations proportionally to their likelihood. Here they are in an unlikely situation that consequentialism doesn’t weight highly. They might be better off with some other value system, but only at the cost of being worse off in more likely situations.
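To make the probabilistic weighting concrete, here is a toy calculation in Python; every number is invented for illustration, and the only point is structural: the verdict in a case like #5 hinges on the assumed leak probability, not on a rule about lying.

```python
# Made-up utilities for scenario #5: keep the secret or confess now.
p_exposed = 0.3        # assumed chance the secret eventually leaks
u_lie_hidden = 2.0     # assumed payoff if the lie is never discovered
u_lie_exposed = -20.0  # assumed payoff if it eventually comes out
u_confess_now = -8.0   # assumed payoff of telling the truth today

eu_lie = (1 - p_exposed) * u_lie_hidden + p_exposed * u_lie_exposed
eu_confess = u_confess_now

print(f"EU(lie) = {eu_lie:.1f}, EU(confess) = {eu_confess:.1f}")
# With these invented numbers the answer flips once p_exposed rises past ~0.45,
# which is the point above: the analysis lives or dies on the leak probability.
```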
You seem to make the error here that you rightly criticize. Your feelings have involuntary, detectable consequences; lying about them can have a real personal cost.
It is my estimate that this leakage is very low, compared to other examples. I’m not claiming it doesn’t exist, and for some people it might conceivably be much higher.
Is this actually possible? Imagine that 10% of people cheat on their spouses when faced with a situation ‘similar’ to yours. Then the spouses can ‘put themselves in your place’ and think “Gee, there’s about a 10% chance that I’d now be cheating on myself. I wonder if this means my husband/wife is cheating on me?”
So if you are inclined to cheat then spouses are inclined to be suspicious. Even if the suspicion doesn’t correlate with the cheating, the net effect is to drive utility down.
I think similar reasoning can be applied to the other cases.
(Of course, this is a very “UDT-style” way of thinking—but then UDT does remind me of Kant’s categorical imperative, and of course Kant is the arch-deontologist.)
Your reasoning goes above and beyond UDT: it says you must always cooperate in the Prisoner’s Dilemma to avoid “driving net utility down”. I’m pretty sure you made a mistake somewhere.
Two things to say:
We’re talking about ethics rather than decision theory. If you want to apply the latter to the former then it makes perfect sense to take the attitude that “One util has the same ethical value, whoever that util belongs to. Therefore, we’re going to try to maximize ‘total utility’ (whatever sense one can make of that concept)”.
I think UDT does (or may do, depending on how you set it up) co-operate in a one-shot Prisoner’s Dilemma. (However, if you imagine a different game, “The Torture Game”, where you’re a sadist who gets 1 util for torturing while inflicting −100 utils on the victim, then of course UDT cannot prevent you from torturing. So I’m certainly not arguing that UDT, exactly as it is, constitutes an ethical panacea.)
Another random thought:
The connection between “The Torture Game” and Prisoner’s Dilemma is actually very close: Prisoner’s Dilemma is just A and B simultaneously playing the torture game with A as torturer and B as victim and vice versa, not able to communicate to each other whether they’ve chosen to torture until both have committed themselves one way or the other.
I’ve observed that UDT happily commits torture when playing The Torture Game, and (imo) being able to co-operate in a one-shot Prisoner’s Dilemma should be seen as one of the ambitions of UDT (whether or not it is ultimately successful).
So what about this then: Two instances of The Torture Game but rather than A and B moving simultaneously, first A chooses whether to torture and then B chooses. From B’s perspective, this is almost the same as Parfit’s Hitchhiker. The problem looks interesting from A’s perspective too, but it’s not one of the Standard Newcomblike Problems that I discuss in my UDT post.
I think, just as UDT aspires to co-operate in a one-shot PD i.e. not to torture in a Simultaneous Torture Game, so UDT aspires not to torture in the Sequential Torture Game.
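The claim that two simultaneous Torture Games compose into a Prisoner’s Dilemma can be checked mechanically. A small Python sketch, using only the payoffs stated above (+1 to the torturer, −100 to the victim):

```python
from itertools import product

def torture_game(torturer_tortures):
    """Payoffs (torturer, victim) for one Torture Game as described above."""
    return (1, -100) if torturer_tortures else (0, 0)

def combined(a_tortures, b_tortures):
    """Two simultaneous games: A torturer/B victim in one, roles swapped in the other."""
    a1, b1 = torture_game(a_tortures)
    b2, a2 = torture_game(b_tortures)
    return a1 + a2, b1 + b2

for a, b in product([True, False], repeat=2):
    print(f"A tortures={a}, B tortures={b} -> {combined(a, b)}")

# Resulting payoffs: both torture (-99, -99); only A (+1, -100); only B (-100, +1);
# neither (0, 0). Torturing plays the role of defection, and 1 > 0 > -99 > -100
# is the usual T > R > P > S ordering of a Prisoner's Dilemma.
```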
If we’re talking about ethics, please note that telling the truth in my puzzles doesn’t maximize total utility either.
UDT doesn’t cooperate in the PD unless you see the other guy’s source code and have a mathematical proof that it will output the same value as yours.
A random thought, which once stated sounds obvious, but I feel like writing it down all the same:
One-shot PD = Two parallel “Newcomb games” with flawless predictors, where the players swap boxes immediately prior to opening.
Doesn’t make sense to me. Two flawless predictors that condition on each other’s actions can’t exist. Alice does whatever Bob will do, Bob does the opposite of what Alice will do, whoops, contradiction. Or maybe I’m reading you wrong?
Sorry—I guess I wasn’t clear enough. I meant that there are two human players and two (possibly non-human) flawless predictors.
So in other words, it’s almost like there are two totally independent instances of Newcomb’s game, except that the predictor from game A fills the boxes in the game B and vice versa.
Yes, you can consider a two-player game as a one-player game with the second player an opaque part of the environment. In two-player games, ambient control is more apparent than in one-player games, but it’s also essential in Newcomb’s problem, which is why you make the analogy.
This needs to be spelled out more. Do you mean that if A takes both boxes, B gets $1,000, and if A takes one box, B gets $1,000,000? Why is this a dilemma at all? What you do has no effect on the money you get.
I don’t know how to format a table, but here is what I want the game to be:
A-action | B-action | A-winnings | B-winnings
2-box | 2-box | $1 | $1
2-box | 1-box | $1001 | $0
1-box | 2-box | $0 | $1001
1-box | 1-box | $1000 | $1000
Now compare this with Newcomb’s game:
A-action | Prediction | A-winnings
2-box | 2-box | $1
2-box | 1-box | $1001
1-box | 2-box | $0
1-box | 1-box | $1000
Now, if the “Prediction” in the second table is actually a flawless prediction of a different player’s action then we obtain the first three columns of the first table.
Hopefully the rest is clear, and please forgive the triviality of this observation.
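If it helps, the correspondence in the two tables can also be checked mechanically. A small Python sketch (amounts in the same dollar units as the tables; nothing below goes beyond what the tables already say):

```python
def newcomb_payoff(action, prediction):
    """Single-player Newcomb winnings, as in the second table."""
    return (1 if action == "2-box" else 0) + (1000 if prediction == "1-box" else 0)

def two_player(a_action, b_action):
    """Two parallel games where each 'prediction' is a flawless prediction of
    the other player's action, i.e. simply that action itself."""
    return newcomb_payoff(a_action, b_action), newcomb_payoff(b_action, a_action)

for a in ("2-box", "1-box"):
    for b in ("2-box", "1-box"):
        print(a, b, two_player(a, b))

# Output: (1, 1), (1001, 0), (0, 1001), (1000, 1000): the first table exactly,
# i.e. a one-shot Prisoner's Dilemma with "1-box" as cooperation.
```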
But that’s exactly what I’m disputing. At this point, in a human dialogue I would “re-iterate” but there’s no need because my argument is back there for you to re-read if you like.
Yes, and how easy it is to arrive at such a proof may vary depending on circumstances. But in any case, recall that I merely said “UDT-style”.
UDT doesn’t specify how exactly to deal with logical/observational uncertainty, but in principle it does deal with them. It doesn’t follow that if you don’t know how to analyze the problem, you should therefore defect. Human-level arguments operate on the level of simple approximate models allowing for uncertainty in how they relate to the real thing; decision theories should apply to analyzing these models in isolation from the real thing.
This is intriguing, but sounds wrong to me. If you cooperate in a situation of complete uncertainty, you’re exploitable.
What’s “complete uncertainty”? How exploitable you are depends on who tries to exploit you. The opponent is also uncertain. If the opponent is Omega, you probably should be absolutely certain, because it’ll find the single exact set of circumstances that make you lose. But if the opponent is also fallible, you can count on the outcome not being the worst-case scenario, and therefore not being able to estimate the value of that worse-case scenario is not fatal. An almost formal analogy is analysis of algorithms in worst case and average case: worst case analysis applies to the optimal opponent, average case analysis to random opponent, and in real life you should target something in between.
The “always defect” strategy is part of a Nash equilibrium. The quining cooperator is part of a Nash equilibrium. IMO that’s one of the minimum requirements that a good strategy must meet. But a strategy that cooperates whenever its “mathematical intuition module” comes up blank can’t be part of any Nash equilibrium.
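For readers unfamiliar with the term, a crude Python rendering of a “quining cooperator” is a program that cooperates exactly when the opponent’s program is textually identical to itself (the sketch below uses inspect.getsource as a shortcut for quining, so it has to be run as a script from a file). Two copies of it cooperate with each other, and neither copy gains by unilaterally switching to a different program, which is the Nash-equilibrium property being invoked.

```python
import inspect

def quining_cooperator(opponent_source: str) -> str:
    """Cooperate iff the opponent's source is exactly this function's own source."""
    my_source = inspect.getsource(quining_cooperator)
    return "C" if opponent_source == my_source else "D"

me = inspect.getsource(quining_cooperator)
print(quining_cooperator(me))                       # "C": a copy of itself
print(quining_cooperator("def defect_bot(): ..."))  # "D": anything else
```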
“Nash equilibrium” is far from being a generally convincing argument. Mathematical intuition module doesn’t come up blank, it gives probabilities of different outcomes, given the present observational and logical uncertainty. When you have probabilities of the other player acting each way depending on how you act, the problem is pretty straightforward (assuming expected utility etc.), and “Nash equilibrium” is no longer a relevant concern. It’s when you don’t have a mathematical intuition module, don’t have probabilities of the other player’s actions conditional on your actions, when you need to invent ad-hoc game-theoretic rituals of cognition.
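A minimal sketch of that last point, with made-up payoffs and with invented conditional probabilities standing in for whatever the mathematical intuition module returns: once you have P(opponent cooperates | my action), the decision is just a comparison of two expected utilities, and no equilibrium analysis enters.

```python
# Row player's payoffs in a standard one-shot PD (hypothetical numbers).
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Assumed output of the "intuition module": P(opponent plays C | my action).
p_coop_given = {"C": 0.8, "D": 0.2}

def expected_utility(my_action):
    p = p_coop_given[my_action]
    return p * payoff[(my_action, "C")] + (1 - p) * payoff[(my_action, "D")]

for a in ("C", "D"):
    print(a, expected_utility(a))   # C: 2.4, D: 1.8 with these invented numbers
# Change the conditional probabilities and the best action changes with them.
```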
It seems like it would be more aptly defined as “the belief that making the world a better place constitutes doing the right thing”. Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don’t care whether it does.
A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have a moderate eudaemonic benefit for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
1. Cheating is a risky activity, and should be avoided if eudaemonic supplies are short.
2. This answer depends on precise relationships between eudaemonic values that are not well established at this time.
3. Given the conditions, lying seems appropriate.
4. Yes.
5. Yes.
6. The husband may be better off. The wife more likely would not be. The child would certainly not be.
Are there any evident flaws in my analysis, on the level at which it was performed?
* The naive consequentialist solution only accounts for direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations—like other spherical cows, this causes a lot of problematic answers, like two-boxing.
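For what it’s worth, one way to make the bookkeeping behind these assumptions explicit is a toy model like the one below. All magnitudes are invented placeholders, only assumptions (a), (b), (d) and (e) are exercised, and the sketch is meant to show the shape of the “naive” calculation (direct effects on the actor, a transmitted fraction for the partner, a probability-weighted revelation cost for both), not to defend the particular answers above.

```python
# Invented constants; the letters track the assumptions listed above.
CHEAT_BENEFIT = 1.0      # (a) small benefit to the cheater
LIE_COST = -0.5          # (b) small cost to the liar
REVELATION_COST = -5.0   # (d) moderate cost of an undermining revelation
TRANSMIT = 0.3           # (e) fraction of effects transmitted to the partner

def couple_utility(actor_effects, p_revealed=0.0):
    """Actor's direct effects, a transmitted fraction of them for the partner,
    and a probability-weighted revelation cost hitting both parties."""
    direct = sum(actor_effects)
    actor = direct + p_revealed * REVELATION_COST
    partner = TRANSMIT * direct + p_revealed * REVELATION_COST
    return actor + partner

# Scenario #1-ish: secret cheating plus the lie needed to sustain it,
# with an assumed 20% chance of eventual discovery.
print("cheat and lie:", couple_utility([CHEAT_BENEFIT, LIE_COST], p_revealed=0.2))
print("abstain:      ", couple_utility([]))
```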
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband’s heart, not for some material benefit. So if she knew the husband didn’t love her, she’d tell the truth. The fact that you automatically parsed the situation differently is… disturbing, but quite sensible by consequentialist lights, I suppose :-)
I don’t understand your answer in #2. If lying incurs a small cost on you and a fraction of it on the partner, and confessing incurs a moderate cost on both, why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can’t wait till other people reply to the questionnaire.
The husband does benefit, by her lights. The chief reason it comes out in the husband’s favor in #6 is because the husband doesn’t value the marital relationship and (I assumed) wouldn’t value the child relationship.
You’re right—in #2 telling the truth carries the risk of ending the relationship. I was considering the benefit of having a relationship with less lying (which is a benefit for both parties), but it’s a gamble, and probably one which favors lying.
On eudaemonic grounds, it was an easy bullet to bite—particularly since I had read Have His Carcase by Dorothy Sayers, which suggested an example of such a relationship.
Incidentally, I don’t accept most of this analysis, despite being a consequentialist—as I said, it is the “naive consequentialist solution”, and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included.
Edit: Note that “happier couples” does not imply “happier coupling”—the risk to the relationship would increase with the increased happiness from the relationship. This analysis of #1 implies instead that couples with stronger but independent social circles should cheat more (last paragraph).
This is an interesting line of retreat! What answers would you change if most people around you were also consequentialists, and what other effects would you include apart from eudaemonic ones?
It’s okay to deceive people if they’re not actually harmed and you’re sure they’ll never find out. In practice, it’s often too risky.
1-3: This is all okay, but nevertheless, I wouldn’t do these things. The reason is that for me, a necessary ingredient for being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.
4-5: The child’s welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.
6: Let’s assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is not problematic for consequentialism any more than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.
1-3: It seems you’re using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It’s more similar to the Prisoner’s Dilemma, if you ask me.
1-3: It’s an alief, not a belief, because I know that lying to my spouse doesn’t really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyways, if I did take that pill, then yes, I would cheat and lie.
Thanks for the link. I think Alicorn would call it an “unofficial” or “non-endorsed” belief.
Let’s put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)
Alicorn seems to think the concepts are distinct, but I don’t know what the distinction is, and I haven’t read any philosophical paper that defines alief : )
All right: If my friend told me they’d had an affair, and they wanted to keep it a secret from their spouse forever, and they had the ability to do so, then I would give them a pill that would allow them to live a happy life without confiding in their spouse — provided the pill does not have extra negative consequences.
Caveats: In real life, there’s always some chance that the spouse will find out. Also, it’s not acceptable for my friend to change their mind and tell their spouse years after the fact; that would harm the spouse. Also, the pill does not exist in reality, and I don’t know how difficult it is to talk someone out of their aliefs and guilt. And while I’m making people’s emotions more rational, I might as well address the third horn, which is to instill in the couple an appreciation of polyamory and open relationships.
The third horn for cases 4-6 is to remove the husband’s biological chauvinism. Whether the child is biologically related to him shouldn’t matter.
Why on earth should this not matter? It’s very important to most people. And in those scenarios, there are the additional issues that she lied to him about the relationship and the kid and cheated on him. It’s not solely about parentage: for instance, many people are ok with adopting, but not as many are ok with raising a kid that was the result of cheating.
I believe that, given time, I could convince a rational father that whatever love or responsibility he owes his child should not depend on where that child actually came from. Feel free to be skeptical until I’ve tried it.
Nisan:
Trouble is, this is not just a philosophical matter, or a matter of personal preference, but also an important legal question. Rather than convincing cuckolded men that they should accept their humiliating lot meekly—itself a dubious achievement, even if it were possible—your arguments are likely to be more effective in convincing courts and legislators to force cuckolded men to support their deceitful wives and the offspring of their indiscretions, whether they want it or not. (Just google for the relevant keywords to find reports of numerous such rulings in various jurisdictions.)
Of course, this doesn’t mean that your arguments shouldn’t be stated clearly and discussed openly, but when you insultingly refer to opposing views as “chauvinism,” you engage in aggressive, warlike language against men who end up completely screwed over in such cases. To say the least, this is not appropriate in a rational discussion.
Relevant article.
Be wary of confusing “rational” with “emotionless.” Because so much of our energy as rationalists is devoted to silencing unhelpful emotions, it’s easy to forget that some of our emotions correspond to the very states of the world that we are cultivating our rationality in order to bring about. These emotions should not be smushed. See, e.g., Feeling Rational.
Of course, you might have a theory of fatherhood that says you love your kid because the kid has been assigned to you, or because the kid is needy, or because you’ve made an unconditional commitment to care for the sucker—but none of those theories seem to describe my reality particularly well.
*The kid has been assigned to me
Well, no, he hasn’t, actually; that’s sort of the point. There was an effort by society to assign me the kid, but the effort failed because the kid didn’t actually have the traits that society used to assign her to me.
*The kid is needy
Well, sure, but so are billions of others. Why should I care extra about this one?
*I’ve made an unconditional commitment
Such commitments are sweet, but probably irrational. Because I don’t want to spend 18 years raising a kid that isn’t mine, I wouldn’t precommit to raising a kid regardless of whether she’s mine or someone else’s. At the very least, the level of commitment of my parenting would vary depending on whether (a) the kid was the child of me and an honest lover, or (b) the kid was the child of my nonconsensual cuckolder and my dishonest lover.
you need more time to convince me
You’re welcome to write all the words you like and I’ll read them, but if you mean “more time” literally, then you can’t have it! If I spend enough time raising a kid, in some meaningful sense the kid will become properly mine. Because the kid will still not be mine in other, equally meaningful senses, I don’t want that to happen, and so I won’t give you the time to ‘convince’ me. What would really convince me in such a situation isn’t your arguments, however persistently applied, but the way that the passage of time changed the situation which you were trying to justify to me.
Okay, here is where my theory of fatherhood is coming from:
You are not your genes. Your child is not your genes. Before people knew about genes, men knew that it was very important for them to get their semen into women, and that the resulting children were special. If a man’s semen didn’t work, or if his wife was impregnated by someone else’s semen, the man would be humiliated. These are the values of an alien god, and we’re allowed to reject them.
Consider a more humanistic conception of personal identity: Your child is an individual, not a possession, and not merely a product of the circumstances of their conception. If you find out they came from an adulterous affair, that doesn’t change the fact that they are an individual who has a special personal relationship with you.
Consider a more transhumanistic conception of personal identity: Your child is a mind whose qualities are influenced by genetics in a way that is not well-understood, but whose informational content is much more than their genome. Creating this child involved semen at some point, because that’s the only way of having children available to you right now. If it turns out that the mother covertly used someone else’s semen, that revelation has no effect on the child’s identity.
These are not moral arguments. I’m describing a worldview that will still make sense when parents start giving their children genes they themselves do not have, when mothers can elect to have children without the inconvenience of being pregnant, when children are not biological creatures at all. Filial love should flourish in this world.
Now for the moral arguments: It is not good to bring new life into this world if it is going to be miserable. Therefore one shouldn’t have a child unless one is willing and able to care for it. This is a moral anti-realist account of what is commonly thought of as a (legitimate) father’s “responsibility” for his child.
It is also not good to cause an existing person to become miserable. If a child recognizes you as their father, and you renounce the child, that child will become miserable. On the other hand, caring for the child might make you miserable. But in most cases, it seems to me that being disowned by the man you call “father” is worse than raising a child for 13 or 18 years. Therefore, if you have a child who recognizes you as their father, you should continue to play the role of father, even if you learn something surprising about where they came from.
Now if you fiddle with the parameters enough, you’ll break the consequentialist argument: If the child is a week old when you learn they’re not related to you, it’s probably not too late to break the filial bond and disown them. If you decide that you’re not capable of being an adequate father for whatever reason, it’s probably in the child’s best interest for you to give it away. And so on.
Yes, we are—but we’re not required to! Reversed Stupidity is not intelligence. The fact that an alien god cared a lot about transferring semen is neither evidence for nor evidence against the moral proposition that we should care about genetic inheritance. If, upon rational reflection, we freely decide that we would like children who share our genes—not because of an instinct to rut and to punish adulterers, but because we know what genes are and we think it’d be pretty cool if our kids had some of ours—then that makes genetic inheritance a human value, and not just a value of evolution. The fact that evolution valued genetic transfer doesn’t mean humans aren’t allowed to value genetic transfer.
I agree with you that in the future there will be more choices about gene-design, but the choice “create a child using a biologically-determined mix of my genes and my lover’s genes” is just a special case of the choice “create a child using genes that conform to my preferences.” Either way, there is still the issue of choice. If part of what bonds me to my child is that I feel I have had some say in what genes the child will have, and then I suddenly find out that my wishes about gene-design were not honored, it would be legitimate for me to feel correspondingly less attached to my kid.
I didn’t, on this account. As I understand the dilemma, (1) I told my wife something like “I encourage you to become pregnant with our child, on the condition that it will have genetic material from both of us,” and (2) I attempted to get my wife pregnant with our child but failed. Neither activity counts as “bringing new life into this world.” The encouragement doesn’t count as causing the creation of life, because the condition wasn’t met. Likewise, the attempt doesn’t count as causing the creation of life, because the attempt failed. In failing to achieve my preferences, I also fail to achieve responsibility for the child’s creation. It’s not just that I’m really annoyed at not getting what I want and so now I’m going to sulk—I really, truly haven’t committed any of the acts that would lead to moral responsibility for another’s well-being.
Again, reversed stupidity is not intelligence. Just because my “intuition” screams at me to say that I should want children who share my genes doesn’t mean that I can’t rationally decide that I value gene-sharing. Going a step further, just because people’s intuitions may not point directly at some deeper moral truth doesn’t mean that there is no moral truth, still less that the one and only moral truth is consequentialism.
Look, I already conceded that given enough time, I would become attached even to a kid that didn’t share my genes. My point is just that that would be unpleasant, and I prefer to avoid that outcome. I’m not trying to choose a convenient example, I’m trying to explain why I think genetic inheritance matters. I’m not claiming that genetic inheritance is the only thing that matters. You, by contrast, do seem to be claiming that genetic inheritance can never matter, and so you really need to deal with the counter-arguments at your argument’s weakest point—a time very near birth.
I agree with most of that. There is nothing irrational about wanting to pass on your genes, or valuing the welfare of people whose genes you partially chose. There is nothing irrational about not wanting that stuff, either.
I want to use the language of moral anti-realism so that it’s clear that I can justify my values without saying that yours are wrong. I’ve already explained why my values make sense to me. Do they make sense to you?
I think we both agree that a personal father-child relationship is a sufficient basis for filial love. I also think that for you, having a say in a child’s genome is also enough to make you feel filial love. It is not so for me.
Out of curiosity: Suppose you marry someone and want to wait a few years before having a baby; and then your spouse covertly acquires a copy of your genome, recombines it with their own, and makes a baby. Would that child be yours?
Suppose you and your spouse agree on a genome for your child, and then your spouse covertly makes a few adjustments. Would you have less filial love for that child?
Suppose a random person finds a file named “MyIdealChild’sGenome.dna” on your computer and uses it to make a child. Would that child be yours?
Suppose you have a baby the old-fashioned way, but it turns out you’d been previously infected with a genetically-engineered virus that replaced the DNA in your germ line cells, so that your child doesn’t actually have any of your DNA. Would that child be yours?
In these cases, my feelings for the child would not depend on the child’s genome, and I am okay with that. I’m guessing your feelings work differently.
As for the moral arguments: In case it wasn’t clear, I’m not arguing that you need to keep a week-old baby that isn’t genetically related to you. Indeed, when you have a baby, you are making a tacit commitment of the form “I will care for this child, conditional on the child being my biological progeny.” You think it’s okay to reject an illegitimate baby, because it’s not “yours”; I think it’s okay to reject it, because it’s not covered by your precommitment.
We also agree that it’s not okay to reject a three-year-old illegitimate child — you, because you’d be “attached” to them; and me, because we’ve formed a personal bond that makes the child emotionally dependent on me.
Edit: formatting.
That’s thoughtful, but, from my point of view, unnecessary. I am an ontological moral realist but an epistemological moral skeptic; just because there is such a thing as “the right thing to do” doesn’t mean that you or I can know with certainty what that thing is. I can hear your justifications for your point of view without feeling threatened; I only want to believe that X is good if X is actually good.
Sorry, I must have missed your explanation of why they make sense. I heard you arguing against certain traditional conceptions of inheritance, but didn’t hear you actually advance any positive justifications for a near-zero moral value on genetic closeness. If you’d like to do so now, I’d be glad to hear them. Feel free to just copy and paste if you think you already gave good reasons.
In one important sense, but not in others. My value for filial closeness is scalar, at best. It certainly isn’t binary.
I mean, that’s fine. I don’t think you’re morally or psychiatrically required to let your feelings vary based on the child’s genome. I do think it’s strange, and so I’m curious to hear your explanation for this invariance, if any.
Oh, OK, good. That wasn’t clear initially.
Ah cool, as I am a moral anti-realist and you are an epistemological moral skeptic, we’re both interested in thinking carefully about what kinds of moral arguments are convincing. Since we’re talking about terminal moral values at this point, the “arguments” I would employ would be of the form “this value is consistent with these other values, and leads to these sort of desirable outcomes, so it should be easy to imagine a human holding these values, even if you don’t hold them.”
Well, I don’t expect anyone to have positive justifications for not valuing something, but there is this:
So a nice interpretation of our feelings of filial love is that the parent-child relationship is a good thing and it’s ideally about the parent and child, viewed as individuals and as minds. As individuals and minds, they are capable of forging a relationship, and the history of this relationship serves as a basis for continuing the relationship. [That was a consistency argument.]
Furthermore, unconditional love is stronger than conditional love. It is good to have a parent that you know will love you “no matter what happens”. In reality, your parent will likely love you less if you turn into a homicidal jerk; but that is kinda easy to accept, because you would have to change drastically as an individual in order to become a homicidal jerk. But if you get an unsettling revelation about the circumstances of your conception, I believe that your personal identity will remain unchanged enough that you really wouldn’t want to lose your parent’s love in that case. [Here I’m arguing that my values have something to do with the way humans actually feel.]
So even if you’re sure that your child is your biological child, your relationship with your child is made more secure if it’s understood that the relationship is immune to a hypothetical paternity revelation. (You never need suffer from lingering doubts such as “Is the child really mine?” or “Is the parent really mine?”, because you already know that the answer is Yes.) [That was an outcomes argument.]
All right, that was moderately convincing.
I still have no interest in reducing the importance I attach to genetic closeness to near-zero, because I believe that (my / my kids’) personal identity would shift somewhat in the event of an unsettling revelation, and so reduced love in proportion to the reduced harmony of identities would be appropriate and forgivable.
I will, however, attempt to gradually reduce the importance I attach to genetic closeness to “only somewhat important” so that I can more credibly promise to love my parents and children “very much” even if unsettling revelations of genetic distance rear their ugly head.
Thanks for sharing!
You make a good point about using scalar moral values!
I’m pretty sure I’d have no problem rejecting such a child, at least in the specific situation where I was misled into thinking it was mine. This discussion started by talking about a couple who had agreed to be monogamous, and where the wife had cheated on the husband and gotten pregnant by another man. You don’t seem to be considering the effect of the deceit and lies perpetuated by the mother in this scenario. It’s very different than, say, adoption, or genetic engineering, or if the couple had agreed to have a non-monogamous relationship.
I suspect most of the rejection and negative feelings toward the illegitimate child wouldn’t be because of genetics, but because of the deception involved.
Ah, interesting. The negative feelings you would get from the mother’s deception would lead you to reject the child. This would diminish the child’s welfare more than it would increase your own (by my judgment); but perhaps that does not bother you because you would feel justified in regarding the child as being morally distant from you, as distant as a stranger’s child, and so the child’s welfare would not be as important to you as your own. Please correct me if I’m wrong.
I, on the other hand, would still regard the child as being morally close to me, and would value their welfare more than my own, and so I would consider the act of abandoning them to be morally wrong. Continuing to care for the child would be easy for me because I would still have filial love for child. See, the mother’s deceit has no effect on the moral question (in my moral-consequentialist framework) and it has no effect on my filial love (which is independent of the mother’s fidelity).
That’s right. Also, regarding the child as my own would encourage other people to lie about paternity, which would ultimately reduce welfare by a great deal more. Compare the policy of not negotiating with terrorists: if negotiating frees hostages, but creates more incentives for taking hostages later, it may reduce welfare to negotiate, even if you save the lives of the hostages by doing so.
Precommitting to this sets you up to be deceived, whereas precommitting to the other position makes it less likely that you’ll be deceived.
If the mother married the biological father and restricted your access to the child, but still required you to pay child support, how would you feel?
This is mostly relevant for fathers who are still emotionally attached to the child.
If a man detaches when he finds that a child isn’t his descendant, then access is a burden, not a benefit.
One more possibility: A man hears that a child isn’t his, detaches—and then it turns out that there was an error at the DNA lab, and the child is his. How retrievable is the relationship?
… I’m sorry, that’s an important issue, but it’s tangential. What do you want me to say? The state’s current policy is an inconsistent hodge-podge of common law that doesn’t fairly address the rights and needs of families and individuals. There’s no way to translate “Ideally, a father ought to love their child this much” into “The court rules that Mr. So-And-So will pay Ms. So-And-So this much every year”.
So how would you translate your belief that paternity is irrelevant into a social or legal policy, then? I don’t see how you can argue paternity is irrelevant, and then say that cases where men have to pay support for other people’s children are tangential.
Nisan:
The same can be said about all values held by humans. So, who gets to decide which “values of an alien god” are to be rejected, and which are to be enforced as social and legal norms?
That’s a good question. For example, we value tribalism in this “alien god” sense, but have moved away from it due to ethical considerations. Why?
Two main reasons, I suspect: (1) we learned to empathize with strangers and realize that there was no very defensible difference between their interests and ours; (2) tribalism sometimes led to terrible consequences for our tribe.
Some of us value genetic relatedness in our children, again in an alien god sense. Why move away from that? Because:
(1) There is no terribly defensible moral difference between the interests of a child with your genes or without.
Furthermore, filial affection is far more influenced by the proxy metric of personal intimacy with one’s children than by a propositional belief that they share your genes. (At least, that is true in my case.) Analogously, a man having heterosexual sex doesn’t generally lose his erection as soon as he puts on a condom.
It’s not for me to tell you your values, but it seems rather odd to actually choose inclusive genetic fitness consciously, when the proxy metric for genetic relatedness—namely, filial intimacy—is what actually drives parental emotions. It’s like being unable to enjoy non-procreative sex, isn’t it?
Me.
How many divisions have you got?
None, I just use the algorithm for any given problem; there’s no particular reason to store the answers.
What happens if two Clippies disagree? How do you decide which Clippy gets priority?
Clippys don’t disagree, any more than your bone cells might disagree with your skin cells.
Have you heard of the human disease cancer?
Have you heard of how common cancer is per cell existence-moment?
Even aside from cancer, cells in the same organism constantly compete for resources. This is actually vital to some human processes. See for example this paper.
They compete only at an unnecessarily complex level of abstraction. A simpler explanation for cell behavior (per the minimum message length formalism) is that each one is indifferent to the survival of itself or the other cells, which in the same body have the same genes, as this preference is what tends to result from natural selection on self-replicating molecules containing those genes; and that they will prefer even more (in the sense that their form optimizes for this under the constraint of history) that genes identical to those contained therein become more numerous.
This is bad teleological thinking. The cells don’t prefer anything. They have no motivation as such. Moreover, there’s no way for a cell to tell if a neighboring cell shares the same genes. (Immune cells can in certain limited circumstances detect cells with proteins that don’t belong but the vast majority of cells have no such ability. And even then, immune cells still compete for resources). The fact is that many sorts of cells compete with each other for space and nutrients.
This insight forms a large part of why I made the statements:
“this preference is what tends to result from natural selection on self-replicating molecules containing those genes”
“they will prefer even more (in the sense that their form optimizes for this under the constraint of history)” (emphasis added in both)
I used “preference” (and specified I was so using the term) to mean a regularity in the result of its behavior which is due to historical optimization under the constraint of natural selection on self-replicating molecules, not to mean that cells think teleologically, or have “preferences” in the sense that I do or that the colony of cells you identify as yourself does.
Ah, ok. I misunderstood what you were saying.
Why not? Just because you two would have the same utility function, doesn’t mean that you’d agree on the same way to achieve it.
Correct. What ensures such agreement, rather, is the fact that different Clippy instances reconcile values and knowledge upon each encounter, each tracing the path that the other took since their divergence, and extrapolating to the optimal future procedure based on their combined experience.
Vladimir, I am comparing two worldviews and their values. I’m not evaluating social and legal norms. I do think it would be great if everyone loved their children in precisely the same manner that I love my hypothetical children, and if cuckolds weren’t humiliated just as I hypothetically wouldn’t be humiliated. But there’s no way to enforce that. The question of who should have to pay so much money per year to the mother of whose child is a completely different matter.
Nisan:
Fair enough, but your previous comments characterized the opposing position as nothing less than “chauvinism.” Maybe you didn’t intend it to sound that way, but since we’re talking about a conflict situation in which the law ultimately has to support one position or the other—its neutrality would be a logical impossibility—your language strongly suggested that the position that you chose to condemn in such strong terms should not be favored by the law.
That’s a mighty strong claim to make about how you’d react in a situation that is, according to what you write, completely outside of your existing experiences in life. Generally speaking, people are often very bad at imagining the concrete harrowing details of such situations, and they can get hit much harder than they would think when pondering such possibilities in the abstract. (In any case, I certainly don’t wish that you ever find out!)
Fair enough. I can’t credibly predict what my emotions would be if I were cuckolded, but I still have an opinion on which emotions I would personally endorse.
Well, I can consider adultery to generally be morally wrong, and still desire that the law be indifferent to adultery. And I can consider it to be morally wrong to teach your children creationism, and still desire that the law permit it (for the time being). Just because I think a man should not betray the children he taught to call him “father” doesn’t necessarily mean I think the State should make him pay for their upbringing.
Someone does have to pay for the child’s upbringing. What the State should do is settle on a consistent policy that doesn’t harm too many people and which doesn’t encourage undesirable behavior. Those are the only important criteria.
Well, infanticide is also technically an option, if no one wants to raise the kid.
Ah, so that’s how your theory works!
Nisan, if you don’t give me $10000 right now, I will be miserable. Also, I’m Russian while you presumably live in a Western country, and dollars carry more weight here, so by giving the money to me you will be increasing total utility.
If I’m going to give away $10,000, I’d rather give it to Sudanese refugees. But I see your point: You value some people’s welfare over others.
A father rejecting his illegitimate 3-year-old child reveals an asymmetry that I find troubling: The father no longer feels close to the child; but the child still feels close to the father, closer than you feel you are to me.
Life is full of such asymmetry. If I fall in love with a girl, that doesn’t make her owe me money.
At this point it’s pretty clear that I resent your moral system and I very much resent your idea of converting others to it. Maybe we should drop this discussion.
I am highly skeptical. I’m not a father, but I doubt I could be convinced of this proposition. Rationality serves human values, and caring about genetic offspring is a human value. How would you attempt to convince someone of this?
Would that work symmetrically? Imagine the father swaps the kid in the hospital while the mother is asleep, tired from giving birth. Then the mother takes the kid home and starts raising it without knowing it isn’t hers. A week passes. Now you approach the mother and offer her your rational arguments! Explain to her why she should stay with the father for the sake of the child that isn’t hers, instead of (say) stabbing the father in his sleep and going off to search “chauvinistically” for her baby.
This is not an honest mirror-image of the original problem. You have introduced a new child into the situation, and also specified that the mother has been raising the “wrong child” for one week, whereas in the original problem the age of the child was left unspecified.
There do exist valuable critiques of this idea. I wasn’t expecting it to be controversial, but in the spirit of this site I welcome a critical discussion.
Really? Why?
I would have expected it to be uncontroversial that being biologically related should matter a great deal. You’re responsible for someone you brought into the world; you’re not responsible for a random person.
So what? If the mother isn’t a “biological chauvinist” in your sense, she will be completely indifferent between raising her child and someone else’s. And she has no particular reason to go look for her own child. Or am I misunderstanding your concept of “biological chauvinism”?
If it was one week in the original problem, would that change your answers? I’m honestly curious.
In the original problem, I was criticizing the husband for being willing to abandon the child if he learned he wasn’t the genetic father. If the child is one week old, the child would grow up without a father, which is perhaps not as bad as having a father and then losing him. I’ve elaborated my position here.
Ouch, big red flag here. Instill appreciation? Remove chauvinism?
IMO, editing people’s beliefs to better serve their preferences is miles better than editing their preferences to better match your own. And what other reason can you have for editing other people’s preferences? If you’re looking out for their good, why not just wirehead them and be done with it?
I’m not talking about editing people at all. Perhaps you got the wrong idea when I said I would give my friend a mind-altering pill; I would not force them to swallow it. What I’m talking about is using moral and rational arguments, which is the way we change people’s preferences in real life. There is nothing wrong with unleashing a (good) argument on someone.
6: In the trolley problem, a deontologist wouldn’t decide to push the man, so the pseudo-fat man’s life is saved, whereas he would have been killed if a consequentialist had been standing behind him; the reason for his death would have been consequentialism.
Maybe you missed the point of my comment. (Maybe I’m missing my own point; can’t tell right now, too sleepy.) Anyway, here’s what I meant:
Both in my example and in the pseudo-trolley problem, people behave suboptimally because they’re lied to. This suboptimal behavior arises from consequentialist reasoning in both cases. But in my example, the lie is also caused by consequentialism, whereas in the pseudo-trolley problem the lie is just part of the problem statement.
Fair point, I didn’t see that. Not sure how relevant the distinction is though; in either world, deontologists will come out ahead of consequentialists.
But we can just as well construct situations where the deontologist would not come out ahead. Once you include lies in the situation, pretty much anything goes. It isn’t clear to me whether one can meaningfully compare the systems based on situations involving incorrect data, unless we have some idea of what sort of incorrect data occurs more often and in what contexts.
Right, and furthermore, a rational consequentialist makes those moral decisions which lead to the best outcomes, averaged over all possible worlds where the agent has the same epistemic state. Consequentialists and deontologists will occasionally screw things up, and this is unavoidable; but consequentialists are better on average at making the world a better place.
That’s an argument that only appeals to the consequentialist.
Of course. I am only arguing that consequentialists want to be consequentialists, despite cousin_it’s scenario #6.
I’m not sure that’s true. Forms of deontology will usually have some sort of theory of value that allows for a “better world,” though it’s usually tied up with weird metaphysical views that don’t jibe well with consequentialism.
You’re right, it’s pretty easy to construct situations where deontologism locks people into a suboptimal equilibrium. You don’t even need lies for that: three stranded people are dying of hunger, and removing the taboo on cannibalism could help two of them survive.
The purpose of my questionnaire wasn’t to attack consequentialism in general, only to show how it applies to interpersonal relationships, which are a huge minefield anyway. Maybe I should have posted my own answers as well. On second thought, that can wait.