General cultural norms label this practice as horrific, and most people’s gut reactions concur. But a good chunk of rationality is separating emotions from logic. Once you’ve used atheism to eliminate a soul, and humans are “just” meat machines, and abortion is an ok if perhaps regrettable practice … well, scientifically, there just isn’t all that much difference between a fetus a couple of months before birth, and an infant a couple of months after.
This doesn’t argue that infants have zero value, but instead that they should be treated more like property or perhaps like pets (rather than like adult citizens). Don’t unnecessarily cause them to suffer, but on the other hand you can choose to euthanize your own, if you wish, with no criminal consequences.
Get one of your friends who claims to be a rationalist. See if they can argue passionately in favor of infanticide.
Once you’ve used atheism to eliminate a soul, and humans are “just” meat machines, and abortion is an ok if perhaps regrettable practice …
Kudos to you for forthrightness. But em… no. Ok, first, it seems to me you’ve swept the ethics of infanticide under the rug of abortion, and left it there mostly unaddressed. Is an abortion an “ok if regrettable practice?” You’ve just assumed the answer is always yes, under any circumstances.
I personally say “definitely yes” before brain development (~12 weeks I think), “you need to talk to your doctor” between 12 and 24 weeks, and “not unless it’s going to kill you” after 24 weeks (fully functioning brain). Anybody who knows more about development is welcome to contradict me, but those were the numbers I came up with a few years ago when I researched this.
If a baby/fetus has a mind, in my books it should be accorded rights—more and more so as it develops. I fail to see, moreover, where the dividing line ought to be in your view. Not to slippery-slope you but—why stop at infants?
(Also note that this is a first-principles ethical argument which may have to be modified based on social expedience if it turns into policy. I don’t want to encourage botched amateur abortions and cause extra harm. But those considerations are separate from the question of whether infants have worth in a moral sense.)
Once you’ve used atheism to eliminate a soul, and humans are “just” meat machines...
This gave me a nasty turn, because probably the most annoying idea religious people have is that if we’re “just” chemicals, then nothing matters. One has to take pains to say that chemicals are just what we’re made of. We have to be made out of something! :) And what we’re made of has precisely zero moral significance (would we have more worth if we were made out of “spirit”?).
I mean, I could sit here all day and tell you about how you shouldn’t read “Moby Dick,” because it’s just a bunch of meaningless pigment squiggles on compressed wood pulp. In a certain very trivial sense I am absolutely right—there is no “élan de Moby Dick” floating out in the aether somewhere independent of physical books. On the other hand I am totally missing the point.
Is an abortion an “ok if regrettable practice?” You’ve just assumed the answer is always yes, under any circumstances.
Sorry, you have a point that my test won’t apply to every rationalist.
The contrast I meant was: if you look at the world population, and ask how many people believe in atheism, materialism, and that abortion is not morally wrong, you’ll find a significant minority. (Perhaps you yourself are not in that group.)
But if you then try to add “believes that infanticide is not morally wrong”, your subpopulation will drop to basically zero.
But, rationally, the gap between the first three beliefs, and the last one, is relatively small. Purely on the basis of rationality, you ought to expect a smaller dropoff than we in fact see. Hence, most people in the first group are avoiding the repugnant conclusion for non-rational reasons. (Or believing in the first three, for non-rational reasons.)
If you personally don’t agree with the first three premises, then perhaps this test isn’t accurate for you.
But, rationally, the gap between the first three beliefs, and the last one, is relatively small. Purely on the basis of rationality, you ought to expect a smaller dropoff than we in fact see. Hence, most people in the first group are avoiding the repugnant conclusion for non-rational reasons. (Or believing in the first three, for non-rational reasons.)
If a baby/fetus has a mind, in my books it should be accorded rights—more and more so as it develops. I fail to see, moreover, where the dividing line ought to be in your view. Not to slippery-slope you but—why stop at infants?
The standard answer is that at that point there is no longer a conflict with the rights of the woman whose body the infant was hooked into. We don’t generally require that people give up their bodily autonomy to support the life of others.
We don’t generally require that people give up their bodily autonomy to support the life of others.
The complication here is that a responsible, consenting adult tacitly accepts giving up her bodily autonomy (or accepts a risk of doing so) when she has sex. That’s precisely the same reason men are required to pay child support even if they didn’t wish for a pregnancy. (Yes, I see the asymmetry; yes, it sucks).
Case-by-case reasoning is probably a good thing in these circs, but unless the mother was not informed (minor/mental illness) or did not consent, then the only really tenable reason for a late-term abortion I can think of is health. In which case the relative weighing of rights is a tricky business, a buck I will pass to doctors, patients & hospital ethics boards.
but unless the mother was not informed (minor/mental illness) or did not consent,
This is already a significant retreat from your previously stated position. (“not unless it’s going to kill you” after 24 weeks)
The complication here is that a responsible, consenting adult tacitly accepts giving up her bodily autonomy (or accepts a risk of doing so) when she has sex.
That’s a hell of an assertion. I don’t really see any reason to accept it as other than a normative statement of what you wish would happen.
That’s precisely the same reason men are required to pay child support even if they didn’t wish for a pregnancy. (Yes, I see the asymmetry; yes, it sucks).
As you say, there is an asymmetry. Garnishing a wage is a bit different, and seems appropriate to me.
Case-by-case reasoning is probably a good thing in these circs,
Yes, it is, so long as it is reasoning rather than assertions that this case is different. We have to specify how it is different, and how those differences make a difference. The easiest way for me to do this is to use analogies. This is dangerous, of course, as one must keep in mind that analogies can ignore relevant differences while emphasizing surface similarities.
So, in this case the relevant specialness you’re calling out is that a risky activity was knowingly engaged in that created a person who needs life support for some time, as well as care and feeding far after that. So I’m going to try to set up an analogous situation, but without sex being the act in question (the sexual element seems irrelevant to me). This will also mean another difference: the person will not be “created” except metaphorically, from a preëxisting person. I personally don’t see how that would be relevant, but I suppose it is possible for others to disagree.
Suppose a person is driving, and crashes into a pedestrian. This ruptures the liver of the pedestrian. A partial transplant of the driver’s liver will save the pedestrian’s life. Is the driver expected to donate their liver? Should it be required by law?
Note that the donor’s death rate for this operation is under 1%. Comparing this to the statistics for maternal death, it is similar to the WHO’s 2005 estimate of a world average of 900 per 100,000, though developed regions have it far lower, at 9 per 100,000.
This is already a significant retreat from your previously stated position. (“not unless it’s going to kill you” after 24 weeks)
Is it? I suppose it is. I contain multitudes. No, honestly, I just didn’t name all my caveats in the previous post (my bad). Clearly there are two people’s interests to take into consideration here. Also, as I noted, that was an ethical rather than legal argument. I don’t have any strong opinions about what the law should do wrt this question.
That’s a hell of an assertion. I don’t really see any reason to accept it as other than a normative statement of what you wish would happen.
I don’t think it’s unreasonable, although you’re right it’s not a fact statement. But I think it’s a fairly well-established principle of ethics & jurisprudence that informed consent implies responsibility. Nobody has to have unprotected sex, so if you (a consenting adult) do so, any reasonably foreseeable consequences are on your shoulders.
Suppose a person is driving, and crashes into a pedestrian. This ruptures the liver of the pedestrian. A partial transplant of the driver’s liver will save the pedestrian’s life. Is the driver expected to donate their liver? Should it be required by law?
It’s a reasonably good analogy I guess. There are two separate questions here: what should the law do, and what should the driver do. I don’t think anybody wants the law to require organ donations from people who behave irresponsibly. However, put in the driver’s shoes, and assuming the collision was my fault, I would feel obligated to donate (if, in this worst-case scenario, I am the only one who can).
There is a slight disanalogy here though, which is that an abortion is an act, whereas a failure to donate is an omission. It’s like the difference between throwing the fat guy on the tracks and just letting the train hit the fat guy.
which is that an abortion is an act, whereas a failure to donate is an omission
I’m curious about the reasoning on what the difference is, except maybe that, with no better options available (it seems), we use omission as the default strategy when consequences are not within our grasp: watching and gathering more information will at least not worsen your later ability to come to a conclusion, with the only caveat that by then it may be too late to act.
“Suppose a person is driving, and crashes into a pedestrian. This ruptures the liver of the pedestrian. A partial transplant of the driver’s liver will save the pedestrian’s life. Is the driver expected to donate their liver? Should it be required by law?”
For organ transplantation, the body biochemistries of the organ donor and recipient must be somewhat compatible, otherwise the transplanted organ gets rejected by the recipient’s immune system. The best transplantation results are between identical twins. For unrelated people, there are tests to estimate the compatibility of organs, and databases. A conclusion: the driver is not generally expected to donate their liver, because in the majority of cases it would not help the victim.
Imagine an alternate universe, where all the human bodies are highly compatible for transplantation purposes.
Yes, I believe it might become a social norm in this alternate universe, or even a law, that the driver must donate their liver to the victim.
Suppose a person is driving, and crashes into a pedestrian. This ruptures the liver of the pedestrian. A partial transplant of the driver’s liver will save the pedestrian’s life. Is the driver expected to donate their liver? Should it be required by law?
This depends mostly upon whether you think that law should enforce doing actions which save lives with insignificant risk to the actor.
If yes, then this (quite special) case is clear-cut, given a few assumptions (liver matches and is healthy, is not already scheduled for another similarly important surgery, etc. etc.). However, at least as far as I know, this is not the case.
And I doubt it will be soon (I simply have not thought yet about whether it should). Just an example: In Austria by default all deceased people are potential donors—you have to file an explicit opt-out. This is quite different from, for instance, Germany. Therefore we have a relatively good “source” of organs. However, though the issue is sometimes under discussion, Germany has not changed its legislation, even with the possibility to compare the numbers. Maybe for religious reasons, or freedom of whomever; I didn’t follow it that closely...
If such simple matters (we are talking about persons who are already medically dead) do not change within years, what can be expected for such really fundamental decisions?
Just an example: In Austria by default all deceased people are potential donors—you have to file an explicit opt-out.
I am very much in favour of this sort of policy; it would do no end of good.
The effect of pretending to have opt-out organ donation is small. Austria is unique in really having opt-out organ donation (everywhere else, next of kin decide in practice), so it’s hard to judge the effect, but it’s not an outlier. In the 90s, Spain became the high outlier and Italy ceased being the low outlier, so rapid change is possible without doing anything ethically sensitive. graph. More Kieran Healy links here.
“Reform of the rules governing consent is often accompanied by an overhaul and improvement of the logistical system, and it is this—not the letter of the law—that makes a difference. Cadaveric organ procurement is an intense, time-sensitive and very fluid process that requires a great deal of co-ordination and management. Countries that invest in that layer of the system do better than others, regardless of the rules about presumed and informed consent.”
In our country, we have opt-out donation, but I guess the relatives can have a veto. I once saw a physician on TV who said some scary things openly. Our doctors are routinely overworked and underpaid. Imagine a doctor who, towards the end of a long shift, sees a patient dying with some of the organs intact. If he decides to report the availability of the organs, he creates several hours of extra work for himself and others, paperwork included. There is little or no financial reward for reporting the organs, I do not remember exactly. They might feel heroic the first couple of times, but eventually, after they have worked there long enough, they resign themselves and stop making these reports.
I saw this on TV circa three years ago; I do not know the current situation.
The driver could instead be made responsible for the victim’s exact medical costs or some fraction thereof, in addition to any punitive or approximated damages. This would provide adequate incentive to seek out ways to reduce those costs, including but not limited to a voluntary donation on the part of the driver or someone who owes the driver a favor.
In the abortion example, the fetus 1) is created already attached and ending ongoing life support may not be the same as requiring that someone who is not providing it provide it, 2) needs life support for an extended period, and 3) can only use the life support of one person.
The complication here is that a responsible, consenting adult tacitly accepts giving up her bodily autonomy (or accepts a risk of doing so) when she has sex.
The complication there is that on the standard view, one cannot give up one’s bodily autonomy permanently. You cannot sell yourself into slavery. The pregnant person always has the right to opt-out of the contract.
Though the fetus would presumably be able to get damages. I guess those get paid to the next-of-kin.
We don’t generally require that people give up their bodily autonomy to support the life of others.
We don’t?
In what situation, exactly, do we fail to do this? I can’t think of any other real-world situation. I can imagine counterfactual ones, sure, but I’m fairly certain most people see those as analogies for abortion and respond appropriately.
We don’t, for instance, require people to donate redundant organs, nor even blood. Nor is organ donation mandatory even after death (perhaps it should be).
What are some cases where we do require people to give up their bodily autonomy?
Mandatory drug testing is the big one I can think of, and it usually arises in a very different context where it’s easy to dehumanize those forced to take such tests: alleged criminals and children.
(Even in these contexts, peeing in a cup or taking a breathalyzer is quite a bit less severe than enduring a forced pregnancy. Mandatory blood draws for DUIs do upset a significant number of people. How you feel about employment tests and sports doping might depend on how you feel about economic coercion and whether it’s truly “mandatory”.)
When one chooses the subjective experience of pain and pleasure as a basic requirement for the privilege of being taken into account when deciding moral matters, and if one assumes that this privilege applies only gradually (i.e. the pain/pleasure experience of a dog is less vivid than that of a human, etc.), then the immediate rightness/wrongness of an action like abortion/infanticide with regard to the fetus/baby should correlate with similar decisions about pets.
simplicio:
I personally say “definitely yes” before brain development (~12 weeks I think), “you need to talk to your doctor” between 12 and 24 weeks, and “not unless it’s going to kill you” after 24 weeks (fully functioning brain).
But if, as I think, we also share common ground in preferring consequentialist ethics, which more or less resolves “omission vs. act” by treating both as similarly morally active, then one has to take into account that an abortion or infanticide will make it impossible for this person to develop, whereas a dog will never by itself, however long you wait, suddenly develop the vivid subjective experience of a human.
And then you have to take into account that consequentialism demands that still more factors be considered, like the increase in bad-practice abortions and increased mental stress for many people.
DonGeddis:
Once you’ve used atheism to eliminate a soul, and humans are “just” meat machines, and abortion is an ok if perhaps regrettable practice …
However, if you do take those matters into account, then the conclusion is not “bad, but OK because of some reasons we do not like”, but simply “OK”. Or not. Whatever conclusion you may come to. And yes, it would probably be a case-by-case decision. Extremely complicated, and given the nature of human thought probably more open to manipulation than one would like.
Then, when we have failed to simplify the method of determining the consequences, we fall back to a “practical simplification”, and here a common line of thinking is: well, there may not be a sharp line between a fetus and a newborn, but we have exactly one criterion we can count on (birth), and it is sufficiently similar to the “real thing” that one can use this metric without too much of a problem. And yes, in practice it works not too badly (when compared with other legal systems).
Very much agreed. This is also why we place much more moral value in the life of a severely brain-damaged human than a more intelligent non-human primate.
Despite some jokes I made earlier, things that could arguably depend on values don’t make good litmus tests. Though I did at one point talk to someone who tried to convert me to vegetarianism by saying that if I was willing to eat pork, it ought to be okay to eat month-old infants too, since the pigs were much smarter. I’m pretty sure you can guess where that conversation went...
Option zero: “There’s an interesting story I once wrote...”
Option one: “Well then, I won’t/don’t eat pork. But that doesn’t mean I won’t eat any animals. I can be selective in which I eat.”
Option two: “mmmmm… babies.”
Option three: “Why can’t I simply not want to eat babies? I can simply prefer to eat pigs and not babies”
Option four: “Seems like a convincing argument to me. Okay, vegetarian now.” (after all, technically you said they tried, but you didn’t say they failed. ;))
Option five: “actually, I already am one.”
Am I missing any (somewhat) plausible branches it could have taken? More to the point, is one of the above the direction it actually went? :)
(My model of you, incidentally, suggests option three as your least likely response and option one as your most likely serious response.)
Well, not quite option two, but yes, “You make a convincing case that it should be legal to eat month-old infants.” One person’s modus ponens is another’s modus tollens...
Option six: “I was a vegetarian, but I’m okay with eating babies, and if pigs are just as smart, it should be okay to eat them too, so you’ve convinced me to give up vegetarianism.”
This reminds me of the elves in Dwarf Fortress. They eat people, but not animals.
I’m imagining this conversation while you’re both holding menus...
In seriousness, there are good instrumental reasons not to allow people to eat month-old infants that are nothing to do with greatly valuing them in your terminal values.
That guy clearly asked you those questions in the wrong order.
Do you believe killing animals for food is OK?
Killing animals for food is the same as eating babies!
Do you believe killing babies for food is OK?
… is obviously going to activate biases leading to the defense of killing animals for food, whether by denying they are equivalent or by claiming to accept killing children for food. Thus the chance of persuading someone that eating babies is morally acceptable depends on how strongly you argue the second point.
However...
Do you believe killing babies for food is OK?
Killing animals for food is the same as eating babies!
Do you believe killing animals for food is OK?
… leads to the opposite bias: if the listener cannot refute your second point, they must either convert to vegetarianism or visibly contradict themselves.
It isn’t a question of current intelligence, it’s a question of potential. Pigs will never grow beyond human-infant-level comprehension. Human babies will eventually become both sapient and sentient.
Saying a baby and a pig can be considered equally intelligent is like saying a midget and an 11-year-old of the same height are equally likely to become basketball players.
Doesn’t this depend on whether one is referring to fluid intelligence or crystallized intelligence? Human babies may have the same crystallized intelligence as adult pigs, but they have much higher fluid intelligence.
I think what happened here is that the vegetarian failed to realize that the component of intelligence that people find morally significant is fluid, not crystallized, and then he equivocated between the two. EY realized what was going on, even if subconsciously, which is why he trolled the vegetarian instead of disputing his premise. Finally, Fallible failed to pick up on the distinction entirely by assuming that “intelligence” always refers to fluid intelligence.
The regrettability of abortion is connected to the availability of birth control, and so similarly, the regrettability of infanticide should be connected to the availability of abortion. A key difference is that while birth control may fail, abortion basically doesn’t. I can think of a handful of reasons for infanticide to make sense when abortion didn’t, and they’re all related to things like unexpected infant disability the parents aren’t prepared to handle, or sudden, badly timed, unanticipated financial/family stability disasters.
In either case, given that the baby doesn’t necessarily occupy privileged uterine real estate the way a fetus must, I think it makes sense to push adoption as the strongly preferred recourse before infanticide reaches the top of the list. Unlike asking a woman who wants an abortion to have the baby and give it up for adoption, this imposes no additional cost on her relative to the alternative.
Additionally, I think any but the most strongly controlled permission for infanticide would lead to cases where one parent killed their baby over the desire of the other parent to keep it. It seems obvious to me that either parent’s wish that the baby live—assuming they’re willing to raise it or give it up for adoption, and don’t just vaguely prefer that it continue being alive while the wants-it-dead parent deal with its actual care—should be a sufficient condition that it live. I might even extend this to other relatives.
Basically, this is a variant on the argument from marginal cases; infants don’t differ from relatively intelligent nonhuman animals in capabilities, so they ought to have the same moral status. If it’s okay to euthanize your dog, it should also be okay to euthanize your newborn.
(The most common use of the argument from marginal cases is to argue that animals deserve greater moral consideration, and not that some humans deserve less, but one man’s modus ponens is another man’s modus tollens.)
(The most common use of the argument from marginal cases is to argue that animals deserve greater moral consideration, and not that some humans deserve less, but one man’s modus ponens is another man’s modus tollens.)
Circa 1792, after Wollstonecraft’s A Vindication of the Rights of Woman, a philosopher named Thomas Taylor published a reductio ad absurdum/parody entitled A Vindication of the Rights of Brutes, which basically took Wollstonecraft’s arguments for more gender equality and replaced women with animals. It reads more or less like an animal rights pamphlet written by Peter Singer.
Professor Mordin Solus solves marginal cases by refusing to experiment on any species with at least one member capable of calculus, which is a bit different from the usual criticism, the “argument from species normality.”
That sounds like a reasonable conclusion—compared to an intelligence capable enough of introspection and planning to make a friendly AI, the overwhelming majority of my actions arise purely from unreasoning instinct.
Any species with at least one member who has demonstrated to humans the capability of doing calculus as per human notions of “doing calculus”.
I don’t remember the source, but I read a story somewhere in which an alien observed a few children playing catch. The alien commented on how impressed it was that they could do such sophisticated calculations so quickly at such a young age.
Your parenthetical comment is the funniest thing I’ve read all day! The contrast with the seriousness of subject matter is exquisite. (You’re of course right about the marginal cases thing too.)
(The most common use of the argument from marginal cases is to argue that animals deserve greater moral consideration, and not that some humans deserve less, but one man’s modus ponens is another man’s modus tollens.)
This is a hand, this is an inviolate right to life...
That’s an amusing example because infanticide was extremely common among human cultures, so all good cultural relativists should be fine with this practice.
Usually there was a strong distinction between actually killing a baby (an extremely wrong thing to do) and abandoning it to the elements (acceptable). I’m not talking about any exotic cultures; ancient Greece and Rome and even large parts of Christian medieval Europe practiced infant abandonment. There are even examples of Greek and Roman writers noting how strange it is that Egyptians and Jews never kill their children—perfect stuff for any cultural relativists. It was only once people switched from abandoning infants to the elements to abandoning them at churches that it ceased being outright infanticide.
Anyway, pretty much the only reason babies are cute is as a defense against abandonment. This shows it was never anything exceptional and was always a major evolutionary force. By some estimates up to 50% of all babies were killed or abandoned to certain death in Paleolithic societies (all such claims are highly speculative of course).
Infant abandonment is normal, and people should have the same right to abandon their babies as they always had. Especially since these days we just put them into orphanages. Choosing infanticide over abandonment is pretty pointless, so why do it?
Killing another living thing doesn’t qualify as “euthanasia” if you do it for your benefit, not that being’s.
By abandoning an infant to an orphanage (it’s not legal everywhere, but in a lot of countries it’s perfectly legal and acceptable) you lose both your responsibility and your control over the baby, so you no longer have any right to euthanize it.
And speaking of euthanasia, we really should seriously re-ban it. We pretty much know how to deal with even the most severe pain—very large doses of opiates to get rid of it, and large doses of stimulants like amphetamines to counter the side effects. The War on Drugs is the reason why we don’t routinely do this for people in severe pain.
We don’t have a magical cure for depression, but if someone is depressed, they cannot make rational decisions for themselves anyway, so they cannot decide to kill themselves legitimately.
Once you cover these cases, there are zero legitimate arguments left for euthanasia.
“Choosing infanticide over abandonment is pretty pointless, so why do it?”
“Killing another living thing doesn’t qualify as “euthanasia” if you do it for your benefit, not that being’s.”
Let me respond with a little storytelling, without making a clear point.
I am not trying to prove you wrong, just sharing my personal experience.
Warning: depressing stories about illness; probably hard reading.
I was once friends with a boy who had progressive muscular dystrophy. It is a degenerative disease in which your muscles gradually stop working; most patients die around the age of 20, because they stop breathing.
If you have heard great stories about people in wheelchairs adapting to their situation—well, here adaptation could only be short-term, because next year you might not be able to do what you can do now. The pain was not excruciating, but there was some; a body deprived of exercise gives you this feedback. If he had a bad dream at night, he could not turn onto his other side (a very usual remedy; most people do it without even realizing). The boy had two suicide attempts, although, frankly, he did not really mean them. He would make phone calls to his friends in the evening to relieve his pain—very unwelcome calls. I sometimes pretended not to be at home, and I know other people who did the same (we were in our twenties). Then his desperation was deepened by the feeling that he was not loved.
Once he was calling his psychologist, and caught her in the middle of a suicide attempt, poisoned by drugs—she repeated to him HIS previous statements from their earlier phone calls. I am not saying it was HIS fault; the lady clearly failed to safeguard against the known risks of her profession (plus she had other problems, a departed partner, etc.). I am just illustrating how hard it sometimes was to deal with him. (He called other people, who saved her life, to close up this branch of the story.)
His parents took great care of him, up to the level of their financial abilities, plus the limited help of our government. There were frequent conflicts between him and his parents, though, which again made him feel unloved. On the other hand, his parents were deeply religious and, knowingly, later had another baby with the same genetic defect; they did not choose abortion. The older boy died at the age of 28, his life being surprisingly long.
This story clearly contains aspects which were not optimized: the parents could have earned more money and brought more comforts into his life, he could have gotten a personal assistant at night, more physiotherapy exercises, a better computer, some lectures on how to deal with people and get a girlfriend (his desires were strong), and he could have tried harder to develop his talents and get a job, which would have made him feel useful to society. (We persuaded him to get a job eventually, as a phone operator; it lasted a year or so.) His friends, including me, could have worked harder on their emotional maturity. But can you see all the energy and resources it takes to make a misery somewhat better?
“Choosing infanticide over abandonment is pretty pointless, so why do it?” Abandoning a baby with a severe genetic defect at birth condemns the baby to an even lower quality of life in most government institutions, unless a millionaire chooses to adopt him.
I have a counterargument to my own reasoning right away—what if some parents had killed their baby diagnosed with adrenoleukodystrophy (but with no symptoms developed yet) a year before Augusto and Michaela Odone invented Lorenzo’s Oil for their son? Such parents would have lost a potentially healthy baby, and the baby would have lost a realistic chance to live a normal life...
I am not really trying to win this argument, just explaining, why I sometimes TOY with the idea of infanticide being not so immoral, and considering it a form of euthanasia.
There are plenty of diseases we can now deal with quite well because we didn’t kill everyone who had them, whether by infanticide or murder. It isn’t a coincidence that treatments get found; if we killed everyone with a disease, there would be no search for a treatment.
More like, to determine whether people are paying any attention. (I once took an online personality test which included questions such as “I’ve never eaten before” to prevent people from using bots or similar to screw up their data.)
It’s hard to get people to answer such things straightforwardly. I once included “Some people have fingernails” in a poll, as about the most uncontroversially true thing I could think of, and participants found a way to argue that it wasn’t true—since “some” understates the proportion.
Well… “Some people” does usually implicate ‘not all people, and not even all people except a non-sizeable minority’, but if we go by implicatures rather than literal meanings, “X has fingernails” (in contexts where everyone knows X is a human), in my experience at least, usually implicates that X’s fingernails are not trimmed nearly as short as possible, since the literal meaning would be quite uninformative once you know X is a human.
To clarify: A = Dust speck in your eye, and your life is otherwise as it would have been without this deal. B = 3^^^3 years of torture, followed by death.
Is that an easy choice for you? If not, can you summarize your arguments in favor of choosing B?
If not, can you summarize your arguments in favor of choosing B?
Well, if I choose B, I’ll be alive for a very large number of years. I’ll be alive so long, that I expect that I’ll get used to anything deployed to torture me. And I’ll be alive so long, I’d need to study a fair amount of cosmology just to understand what my lifetime will involve, by way of the deaths and rebirths of whole universes or whatever. Some of that would be interesting to see.
The easy thought experiment would be dust speck vs. 3 years of torture followed by death. I think there, I’d go with the speck.
I’ll be alive so long, that I expect that I’ll get used to anything deployed to torture me.
Is this based on the experience of torture victims? I think that “get used to” would more closely resemble “catatonic” than “unperturbed.” I don’t think your ability to be interested would survive very long.
If you’ve acclimated to torture it’s no longer torture.
If you’ve acclimated to torture the effects have likely left you with a life not worth living.
Torture isn’t something you can acclimate yourself to in hypotheticals. E.g., the interlocutor could say “Oh, you would acclimate to waterboarding? Well then I’ll scoop your brain out, intercept your sensory modalities, and feed you horror. But wait, just when you’re getting used to it I wipe your memory.”
All this misses the point of the hypothetical by being too focused on the details rather than the message. Have you told someone the trolley experiment and had them say something like “but I would call the police, or I’m not strong enough to push a fat man over” and have to reform the experiment over and over until they got the message?
Torture isn’t something you can acclimate yourself to in hypotheticals....
This is a fair point. Though my response was very much intended to be a joke.
All this misses the point of the hypothetical by being too focused on the details rather than the message. Have you told someone the trolley experiment and had them say something like “but I would call the police, or I’m not strong enough to push a fat man over” and have to reform the experiment over and over until they got the message?
I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one, and it’s no accident that it’s people who hear these problems for the first time that react like this. They’re the only ones taking it seriously: moral reasoning is not hypothetical, and what they’re doing is refusing to treat the problem hypothetically.
Learning to operate within the hypothetical just means learning to stop seeing it as an opportunity for moral reasoning. After that, all we’re doing is trying to maximize a value under a theory. But that’s neither here nor there.
I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one,
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me. Indeed, I’m inclined to doubt it.
In much the same way: if I’m asked to multiply 367 by 1472 the response I would give in the real world is to launch a calculator application, but when asked to do this by the woman giving me a neuropsych exam after my stroke I didn’t do that, because I understood that the goal was not to find out the product of 367 and 1472 but rather to find out something about my brain that would be revealed by my attempt to calculate that product.
I agree with you that it’s no accident that people react like this to trolley problems, but I disagree with your analysis of the causes.
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me.
You called the trolley problem a pedagogic tool: what do you have in mind here specifically? What sort of work do you take the trolley problem to be doing?
It clarifies the contrast between evaluating the rightness of an act in terms of the relative desirability of the likely states of the world after that act is performed or not performed, vs. evaluating the rightness of an act in other terms.
Okay, that sounds reasonable to me. But what do we mean by ‘act’ in this case? We could for instance imagine a trolley problem in which no one had the power to change the course of the train, and it just went down one track or the other on the basis of chance. We could still evaluate one outcome as better than the other (presumably the one man dying instead of five), but there’s no action.
Are we making a moral judgement in that case? Or do we reason differently when an agent is involved?
What I say about your proposed scenario is that the hypothetical world in which five people die is worse than the hypothetical world in which one person dies, all else being equal. So, no, my reasoning doesn’t change because there’s an agent involved.
But someone who evaluates the standard trolley problem differently might come to different conclusions.
For example, I know any number of deontologists who argue that the correct answer in the standard trolley problem is to let the five people die, because killing someone is worse than letting five people die. I’m not exactly sure what they would say about your proposed scenario, but I assume they would say in that case, since there’s no choice and therefore no “killing someone” involved, the world where five people die is worse.
Similarly, given someone like you who argues that the correct answer in the standard trolley problem is to “yell real loud or call the police or break the game somehow,” I’m not sure what you would say about your own proposed scenario.
It shows that someone is engaging with the problem as a serious moral one
I think it shows someone is trying to “solve” a hypothetical or be clever, because with a trivial amount of deliberation they would anticipate the interlocutor’s response and reform. Moreover, none of this engages the point of the exercise, against which you’re free to argue without being opaque. E.g., “okay, clearly the point of this trolley experiment is to see if my moral intuitions align with consequentialism or utilitarianism; I don’t think this experiment does that because blah blah blah.”
Moreover, moral reasoning is hypothetical if you’re sufficiently reflective.
Moreover, moral reasoning is hypothetical if you’re sufficiently reflective.
Well, in what kinds of things does moral reasoning conclude? I suppose I would say ‘actions and evaluations’ or something like that. Can you think of anything else?
Moral reasoning should inform your moral intuitions—what you’ll do in the absence of an opportunity to reflect. How do you prepare your moral intuitions for handling future scenarios?
Well, regardless of whether we have time to reflect or not, I take it moral reasoning or moral intuitions conclude either in an action or in something like an evaluative judgement. This would distinguish such reasoning, I suppose, from theoretical reasoning which begins from and concludes in beliefs. Does that sound right to you?
An evaluative judgement is an action; you’re fundamentally saying moral reasoning has consequences. I agree with that, of course. I don’t think it distinguishes it from theoretical reasoning.
By ‘action’ I mean something someone might see you do, something undertaken intentionally with the aim of changing something around you. But when we ask someone to react to a trolley problem, we don’t expect them to act as a result of their reasoning (since there’s no actual trolley). We just want them to reply. So sometimes moral reasoning concludes merely in a judgement, and sometimes it concludes in an action (if we were actually in the trolley scenario, for example) that will, I suppose, also involve a judgement. Does all this seem reasonable to you?
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion (I think it’s that you want to say hypotheticals produce different results than reality). But to answer your question, I don’t think that giving a result to the trolley problem merely results in a judgement. I think it also potentially results in reflective equilibrium of moral intuitions, which then possibly results in different decisions in the future (I’ve had this experience). I think it also potentially affects the interlocutor or audience.
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion.
I’ve already given you my conclusion, such as it is: not that hypotheticals produce different results, but that reasoning about hypotheticals can’t be moral reasoning. I’m just trying to think through the problem myself, I don’t have a worked out theory here, or any kind of plan. If you have a more productive way to figure out how hypotheticals are related to moral reasoning then I’m happy to pursue that.
But to answer your question, I don’t think that giving a result to the trolley problem merely results in a judgement.
Right, but I’m just talking about the posing of the question as an invitation for someone to think about it. The aim or end result of that thinking is some kind of conclusion, and I’m just asking what kinds of conclusions moral reasoning ends in. Since we use moral reasoning in deciding how to act, I take it for granted that one kind of conclusion is an action: “It is right to X, and possible for me to X, therefore...” and then comes the action. When someone is addressing a trolley problem, they might think to themselves: “If one does X, one will get the result A, and if one does Y, one will get the result B. A is preferable to B, so...” and then comes the conclusion. The conclusion in this case is not an action, but just the proposition that “...given the circumstances, one should do X.”
ETA: So, supposing that reasoning about the trolley problem here is moral reasoning (as opposed to, say, the sort of reasoning we’re doing when we play a game of chess), then moral reasoning can conclude sometimes in actions, and sometimes in judgements.
Suppose I sit down at time T1 to consider the hypothetical question of what responses I consider appropriate to various events, and I conclude that in response to event E1 I ought to take action A1. Then at T2, E1 occurs, and I take action A1 based on reasoning of the form “That’s E1, and I’ve previously decided that in case of E1 I should perform A1, so I’m going to perform A1.”
If I’ve understood you correctly, the only question being discussed here is whether the label “moral reasoning” properly applies to what occurs at T1, T2, both, or neither.
Can you give me an example of something that might be measurably different in the world under some possible set of conditions depending on which answer to that question turns out to be true?
If I’ve understood you correctly, the only question being discussed here is whether the label “moral reasoning” properly applies to what occurs at T1, T2, both, or neither.
You’ve understood me perfectly, and that’s an excellent way of putting things. I think there’s an interpretation of those variables such that both what occurs at T1 and at T2 could be called moral reasoning, especially if one expects E1 to occur. But suppose you just, by way of armchair reasoning, decide that if E1 ever happens, you’ll A1. Now suppose E1 has occurred, but suppose also that you’ve forgotten the reasoning which led you to conclude that A1 would be right: you remember the conclusion, but you’ve forgotten why you thought it. That scenario would, I believe, satisfy your description, and it would be a case in which your action is quite suspect. Not wholly so, since you may have good reason to believe your past decisions are reliable, but if you don’t know why you’re acting when you act, you’re not acting in a fully rational way.
I think it would be appropriate to say, in this case, that you are not to be morally praised (e.g. “you’re a good person”, “You’re a hero” etc.) for such an action (if it is good) in quite the measure you would be if you knew what you were doing. I bring up praise, just because this is an easy way for us to talk about what we consider to be the right response to morally good action, regardless of our theories. Does all this sound reasonable?
If what went on at T1 was fully moral reasoning, then no part of the moral action story seems to be left out: you reasoned your way to an action, and at some later time undertook that action. But if it’s true that we would consider an action in which you’ve forgotten your reasoning a defective action, less worthy of moral praise, then we consider it important that the reasoning be present to you as you act.
And I take it for granted, I suppose, that we don’t consider it terribly praiseworthy for someone to come to a bunch of good conclusions from the armchair and never make any effort to carry them out.
I’ll point out again that the phrase “moral reasoning” as you have been using it (to mean praiseworthy reasoning) is importantly different from how that phrase is being used by others.
That aside, I agree with you that in the scenario you describe, my reasoning at T2 (when E1 occurs) is not especially praiseworthy and thus does not especially merit the label “moral reasoning” as you’re using it. I don’t agree that my reasoning at T1 is not praiseworthy, though. If I sit down at T1 and work out the proper thing to do given E1, and I do that well enough that when E1 occurs at T2 I do the proper thing even though I’m not reasoning about it at T2, that seems compelling evidence that my reasoning at T1 is praiseworthy.
If I sit down at T1 and work out the proper thing to do given E1, and I do that well enough that when E1 occurs at T2 I do the proper thing even though I’m not reasoning about it at T2, that seems compelling evidence that my reasoning at T1 is praiseworthy.
Sure, we agree there, I just wanted to point out that the, shall we say, ‘presence’ of the reasoning in one’s action at T2 is both a necessary and sufficient condition for the action’s being morally praiseworthy if it’s good. The reasoning done at T1 is, of itself, neither necessary nor sufficient.
I don’t agree that the action at T2 is necessary. I would agree that in the absence of the action at T2, it would be difficult to know that the thinking at T1 was praiseworthy, but what makes the thinking at T1 praiseworthy is the fact that it led to a correct conclusion (“given E1 do A1”). It did not retroactively become praiseworthy when E1 occurred.
So you would say that deliberating to the right answer in a moral hypothetical is, on its own, something which should or could earn the deliberator moral praise?
Would you say that people can or ought to be praised or blamed for their answers to the trolley problem?
I would say that committing to a correct policy to implement in case of a particular event occurring is a good thing to have done. (It is sometimes an even better thing to have done if I can then articulate that policy, and perhaps even that commitment, in a compelling way to others.)
I think that’s an example of “deliberating to the right answer in a moral hypothetical earning moral praise” as you’re using those phrases, so I think yes, it’s something that could earn moral praise.
People certainly can be praised or blamed for their answers to the trolley problem—I’ve seen it happen myself—but that’s not terribly interesting.
More interestingly, yes, there are types of answers to the standard trolley problem I think deserve praise.
In case of a possible misunderstanding: I didn’t mean to imply that moral reasoning is literally hypothetical, but that hypotheticals can be a form of moral reasoning (and I hope we aren’t arguing about what ‘reasoning’ is). The problem that I think you have with this is that you believe hypothetical moral reasoning doesn’t generalize? If so, let me show you how that might work.
Hmm, save one person or let five people die.
My intuition tells me that killing is wrong.
Wait, what is intuition and why should I trust it?
I guess it’s the result of experience: cultural, personal, and evolution.
Now why should I trust that?
I suppose I shouldn’t because there’s no guarantee that any of that should result in the “right” answer. Or even something that I actually prefer.
Hmm… If I look at the consequences, I see I prefer a world in which the five people live.
And this could go on and on until you’ve recalibrated your moral intuitions using hypothetical moral reasoning, and now when asked a similar hypothetical (or put in a similar situation) your immediate intuition is to look at the consequences. Why is the hypothetical part useful? It uncovers previously unquestioned assumptions. It’s also a nice compact form for discussing such issues.
but that hypotheticals can be a form of moral reasoning (and I hope we aren’t arguing about what ‘reasoning’ is).
We’re not, and I understand. We do disagree on that claim: I’m suggesting that no moral reasoning can be hypothetical, and that if some bit of reasoning proceeds from a hypothetical, we can know on the basis of that alone that it’s not really moral reasoning. I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
Hmm… If I look at the consequences, I see I prefer a world in which the five people live.
This is a good framing, thanks. By ‘on and on’ I assume you mean that the reasoner should go on to examine his decision to look at expected consequences, and perhaps more importantly his preference for the world in which five people live. After all, he shouldn’t trust that any more than the intuition, right?
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
Can’t that apply to hypotheticals? If you come to the wrong conclusion you’re a horrible person, sort of thing.
I would probably call “moral reasoning” something along the lines of “reasoning about morals”. Even using your above definition, I think reasoning about morals using hypotheticals can result in a judgment, about what sort of action would be appropriate in the situation.
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed.
That can’t be what people normally mean by “moral reasoning”. Do you have a philosophy background?
I’m suggesting that no moral reasoning can be hypothetical
I don’t see why that would be the case. Cheap illustration:
TEACHER: Jimmy, suppose I tell you that P, and also that P implies Q. What does that tell you about Q?
JIMMY: Q is true.
TEACHER: That’s right Jimmy! Your reasoning is praiseworthy!
JIMMY: Getting the right answer while reasoning about that hypothetical fills me with pride!
I don’t see why that would be the case. Cheap illustration:...
You’ve taken my conditional: “If something is moral reasoning, it is something for which we can be praised or blamed” for a biconditional. I only intend the former. ETA: I should say more. I don’t mean any kind of praise or blame, but the kind appropriate to morally good or bad action. One might believe that this isn’t different in kind from the sort of praise we offer in response to, say, excellence in playing the violin, but I haven’t gotten the sense that this view is on the table. If we agree that there is such a thing as distinctively moral praise or blame, then I’ll commit to the biconditional.
I suspect ABrooks is continuing his tradition of interpreting “X reasoning” to mean reasoning that has the property of being X, rather than reasoning about X.
If I’m right, I expect his reply here is that your example is not of hypothetical reasoning at all—supposing that actually happened, Jimmy really would be reasoning, so it would be actual reasoning. Sure, it would be reasoning about a hypothetical, but so what?
I share your sense, incidentally, that this is not what people normally mean, either by “moral reasoning” or by “hypothetical reasoning.”
I suspect ABrooks is continuing his tradition of interpreting “X reasoning” to mean reasoning that has the property of being X, rather than reasoning about X.
It’s not an interpretation, it’s a claim. If something is reasoning about moral subject matter, then, I claim, it is the sort of thing that is (morally) praiseworthy or blameworthy. When we call someone bad or good for something they’ve done, we at least in part mean to praise or blame their reasoning. And one of the reasons we call someone good or bad, or their action good or bad, is an evaluation of their reasoning as good or bad. And praise and blame are, of course, the products of moral reasoning. And we do consider them to be morally valued: to (excepting cases of ignorance) praise bad people is itself bad, and to blame good people is itself good.
Now, the claim I’m arguing against is the claim that there is another kind of moral reasoning which is a) neither praiseworthy, nor blameworthy, b) does not result in an action or an evaluation of an actual person or action, and c) is somehow tied to or predictive of reasoning that is praiseworthy, blameworthy, and resulting in action or actual evaluation.
So I’ve never intended ‘moral reasoning’ to mean ‘reasoning that is moral’ except as a consequence of my argument. That phrase means, in the first place, reasoning about moral matters. Same goes for how I’ve been understanding ‘hypothetical reasoning’. (ETA: though here, I can’t see how one could draw a distinction between ‘reasoning from a hypothetical’ and ‘reasoning that is hypothetical’. I’m not trying to talk about ‘reasoning about a hypothetical’ in the broadest sense, which might include coming up with trolley problems. I only mean to talk about reasoning that begins with a hypothetical.)
If something is reasoning about moral subject matter, then, I claim, it is the sort of thing that is (morally) praiseworthy or blameworthy.
Er. Just to make sure I understand this: is “whether it’s correct to put babies in a blender for fun” moral subject matter? If so, does it follow that if I am reasoning about whether it’s correct to put babies in a blender for fun, I am therefore something that is reasoning about moral subject matter? If so, does it follow that I am the sort of thing that is morally praiseworthy or blameworthy?
When we call someone bad or good for something they’ve done, we at least in part mean to praise or blame their reasoning.
Sure, if I were to say “Sam is a bad person” because Sam did X, I would likely be trying to imply something about the thought process that led Sam to do X.
And one of the reasons we call someone good or bad, or their action good or bad, is an evaluation of their reasoning as good or bad.
I agree that it’s possible for me to call Sam “good” or “bad” based on some aspect of their reasoning, as above, though I don’t really endorse that usage. I agree that it’s possible to call Sam’s act “good” or “bad” based on some aspect of Sam’s reasoning, although I don’t endorse that usage either. I agree that it’s possible to label reasoning that causes me to call either Sam or Sam’s act “good” or “bad” as “good reasoning” or “bad reasoning”, respectively, but this is neither something I could ever imagine myself doing, nor the interpretation I would naturally apply to labeling reasoning in this way.
And praise and blame are, of course, the products of moral reasoning.
That’s not clear to me.
to (excepting cases of ignorance) praise bad people is itself bad,
That’s not clear to me either.
and to blame good people is itself good.
That’s definitely not clear to me.
So I’ve never intended ‘moral reasoning’ to mean ‘reasoning that is moral’ except as a consequence of my argument. That phrase means, in the first place, reasoning about moral matters.
Ah, OK. That was in fact not clear; thanks for clarifying it.
Just to make sure I understand this: is “whether it’s correct to put babies in a blender for fun” moral subject matter?
Not necessarily; it may or may not be taken up as a moral question. We can, for example, study just how much fun it is and leave aside the question of its moral significance. If you’re reasoning about whether or not it’s right in some moral sense to put babies in a blender, then you’re doing something like moral reasoning, but if this were purely hypothetical then I think it would fall short. If you were seriously considering putting babies in a blender, then I think I’d want to call it moral reasoning, but in this case I think you could obviously be praised or blamed for your answer (well, maybe not praised so much).
and to blame good people is itself good.
That’s definitely not clear to me.
Sorry, typo. I meant ‘to blame good people (or to blame people for good actions) is bad.’ It shows some praiseworthy decency to appreciate the moral life of, I donno, MLK. It shows real character to stick up for a good but maligned person. Likewise, it shows some shallowness to have praised someone who only appeared good, but was in fact bad. And it shows some serious defect of character to praise someone we know to be bad (I donno, Manson?).
I agree that it’s possible for me to call Sam “good” or “bad” based on some aspect of their reasoning, as above, though I don’t really endorse that usage.
What’s the difference between agreeing here, and endorsing the usage?
OK, so just to be clear, you would say that the following are examples of moral reasoning...
“It would be fun to put this baby in that blender, and I want to have fun, but it would be wrong, so I won’t”
“It would be wrong to put this baby in that blender, and I don’t want to be wrong, but it would be fun, so I will”
...and the following are not:
“In general, putting babies in blenders would be fun, and I want to have fun, but in general it would be wrong, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would not do so, all else being equal.”
“In general, putting babies in blenders would be wrong, and I don’t want to be wrong, but in general it would be fun, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would do so, all else being equal.”
Yes? No?
If so, I continue to disagree with you; I absolutely would call those last two cases examples of moral reasoning. If not, I don’t think I’m understanding you at all.
What’s the difference between agreeing here, and endorsing the usage?
If A is some object or event that I observe, and L is a label in a language that consistently evokes a representation of A in the minds of native speakers, I agree that it’s possible for me to call A L. If using L to refer to A has other effects beyond evoking A, and I consider those effects to be bad, I might reject using L to refer to A.
For example, I agree that the label “faggot” reliably refers to a male homosexual in American English, but I don’t endorse the usage in most cases because it’s conventionally insulting. (There are exceptions.)
‘to blame good people (or to blame people for good actions) is bad.’ It shows some praiseworthy decency
Incidentally, here you demonstrate one of the behaviors that causes me not to endorse the usage of calling Sam “good” or “bad” in this case. First you went from making an observation about a particular act of reasoning to labeling the reasoner in a particular way, and now you’ve gone from labeling the reasoner in that way to inferring other facts about the reasoner. I would certainly agree that the various acts we’re talking about are evidence of praiseworthy decency on Sam’s part, but the way you are talking about it makes it very easy to make the mistake of treating them as logically equivalent to praiseworthy decency.
People do this all the time (e.g., the fundamental attribution error), and it causes a lot of problems.
I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed.
Oh! I understand you now. Thanks for clarifying this.
An obvious argument in favor of B is that you get to live for 3^^^3 years. A reframing:
A = Dust speck in your eye, after which you lead a normal life except that you cease to exist a mere 60 years later. B = Tortured for the rest of your life, but you never die.
(nods) That seemed the obvious argument, as you say, though it depends on the notion that being tortured for a year is a net utility gain (relative to not existing for that year at all), which seemed implausible to me. But it turns out that is indeed what ABrooks meant.
I generally avoid downvoting comments that are direct responses to me. I’m not exactly sure why, beyond a sense that it just feels wrong, although I can justify it in a number of different ways that I’m pretty sure aren’t my real reasons.
I do the same. The reasoning that comes to mind is that the timing tends to imply that you did it, and that that—especially if you’re already in an adversarial mode—can provoke a cycle of retaliation that’s harmful to your karma and doesn’t carry much informative value. Short of that, I feel it carries adversarial implications that’re harmful to the quality of discussion.
I’m reasonably sure that that’s my true objection.
Yeah, that’s plausible in my case as well. Evidence in favor of it is that I do become mildly anxious when people who are responding to me get downvoted by others, which suggests that I fear retaliation.
I thought that too, but I assumed I’d die right after being tortured anyway. And I’d rather live to age n without ever being tortured than live to age n + m being tortured for m years.
Doesn’t appreciably constrain your behavior, though, unless you happen to be the star of a popular Showtime series or something. Declaring a policy is only meaningful if it actually affects your choices, which in this case only makes sense if you expect to be considering mass murder as a solution to your problems.
And in a situation as extreme as that, I wouldn’t be surprised if some otherwise unthinkable subjective downsides came up.
We don’t have a magical cure for depression, but if someone is depressed, they cannot make rational decisions for themselves anyway, so they cannot decide to kill themselves legitimately.
Suppose I say now, in my non-depressed state, that if I were ever to become so depressed that I wanted to die, I’d prefer that this want be fulfilled.
We cannot allow this any more than we can allow people to sell themselves into slavery as a loan guarantee.
Which doesn’t preclude allowing both. I can see benefits of allowing the latter. Or, more to the point, I can see situations where forbidding the latter is morally abhorrent. Specifically, when there is not a safety net in place that prevents people starving or otherwise suffering for the lack of finances that they should be able to acquire.
We pretty much know how to deal with even the most severe pain—very large doses of opiates to get rid of it, and large doses of stimulants like amphetamines to counter the side effects.
I’d be incredibly surprised if this actually worked clinically.
That doesn’t answer my question. I’m not interested in the ethical, legal, and societal barriers to adequate pain management, which is what your link covers as far as I can tell.
I want to know how one intends to circumvent opiate tolerance, and whether or not large doses of stimulants really do counteract the side effects of large doses of opiates in a large enough class of people to be effective, without the side effects of these stimulants becoming undesirable.
Assembling a drug cocktail in order to achieve some central result while minimizing side effects, with ongoing adjustment as the severity of the underlying condition and the patient’s sensitivity to the drugs in question both change, is one of those complicated problems which modern medicine is nonetheless capable of solving, given adequate resources.
I think if someone’s paying you to perform a service for them, that counts as doing it for their benefit. You’re benefiting from the money, not the act itself.
A key point is that they don’t need to advocate the legalization of infanticide, they just need to be able to cogently address the arguments for and against it. Personally, I think that in the US at this time optimal law might restrict abortion significantly more than it currently does and also that in many past cultural contexts efforts to outlaw or seriously deter infanticide would have been harmful. Just disentangling morality from law competently gets a person props.
First, when a woman is pregnant but will be unable to raise her child, we do not force her to give birth and then give the baby up for adoption. This is because bringing a child to term is a painful, expensive and dangerous nine-month ordeal which we do not think women should be forced into. In what possible circumstances is infanticide ethically permissible when the baby is born, the woman has already paid the cost of pregnancy and giving birth, and adoption is an option?
In general, I’m not sure it follows from the fact that persons aren’t magic that persons are less valuable than we thought. Maybe babies are just glorified goldfish. Maybe they aren’t valuable in the way we thought they were. But I haven’t seen that evidence.
Due to a severe birth defect, the baby is profoundly mentally retarded, will suffer severe pain its entire life, and will most likely not live to see its fifth birthday.
Unfortunately, thus phrased it fails as a litmus test. For better discrimination, leave out the part about childhood death, then the pain. Then, if you’re adventurous, the retardation.
Once you’ve left out the pain I no longer think killing the baby is ethically permissible. And I don’t see how knowing that people don’t have souls alters my position.
Most people’s moral gut reactions say that humans are very important, and everything else much less so. This argument is easier to make “objective” if humans are the only things with everlasting souls.
Once you get rid of souls, making the argument that humans have some special moral place in the world becomes much more difficult. It’s probably an argument that is beyond the reach of the average person. After all, in the space of “things that one can construct out of atoms”, humans and goldfish are very, very close.
I like what Hook wrote. If I believed that babies were valuable because they have souls and then was told, “no they don’t have souls”, I might for a while value them less. But it has been a very long time since I believed in souls and the value I assign to babies is no longer related at all to my belief about souls (if it ever was).
After all, in the space of “things that one can construct out of atoms”, humans and goldfish are very, very close.
Sure, they just don’t resemble each other in many morally significant ways (the exception, perhaps, being some kind of experience of pain). There is no reason to think the facts that determine our ethical obligations make use of the same kinds of concepts and classifications we use to distinguish different configurations of atoms. Humans and wet ash are both mostly carbon and water, and so have a lot more in common than, say, the Sun. But wet ash and the Sun share more of the traits we’re worried about when we’re thinking about morality. The same goes for aesthetic value, if we need a non-ethics analogy.
I think “making the argument that humans have some special moral place in the world” in the absence of an eternal soul is very easy for someone intelligent enough to think about how close humans and goldfish are “in the space of ‘things that one can construct out of atoms.’”
Morality is complicated and abstract. Maybe cetaceans, chimps, and/or parrots have some concept of morality which is simply beyond the scope of the simple-grammar, concrete-vocabulary interspecies languages so far developed.
Show me someone who actually needs to be convinced. Just about everyone acts as if that is true. One could argue that they are just consequentialists trying to avoid the bad consequences of treating people as if they are not morally special. I’m not even sure that is the psychological reality for psychopaths though.
Also, a corollary of what Matt said, if humans aren’t morally special, is anything?
Leaving aside the physical complications of moving cows, I think most vegetarians would find the decision to push a cow onto the train tracks to save the lives of four people much easier to make than pushing a large man onto the tracks, implying that humans are more special than cows.
EDIT:
The above scenario may not work out so well for Hindus and certain extreme animal rights activists. It may be better to think about pushing one cow to save four cows vs. one human to save four humans. It seems like the cow scenario should be much less of a moral quandary for everyone.
I agree that they would probably have that reaction, but that’s not the question; the question is whether that’s a rational reaction to have given relatively simple starting assumptions.
‘Starting assumptions’ as I used it is basically the same concept as ‘terminal moral values’, and a terminal moral value that refers to humans specifically is arguably more complex than one that talks about life in general or minds in general.
More-complex terminal moral values are generally viewed with some suspicion here, because it’s more likely that they’ll turn out to have internal inconsistencies. It’s also easier to use them to rationalize about irrational behavior.
I think “making the argument that humans have some special moral place in the world” in the absence of an eternal soul is very easy for someone intelligent enough to think about how close humans and goldfish are “in the space of ‘things that one can construct out of atoms.’”
You seem to be equivocating. What do you really think?
(1) Do you believe there are logical reasons for terminal values?
(2) Do you believe that it would be easy to argue that humans have special moral status even without divine external validation (e.g., without a soul)?
This doesn’t argue that infants have zero value, but instead that they should be treated more like property or perhaps like pets (rather than like adult citizens).
You haven’t taken account of discounted future value. A child is worth more than a chimpanzee of equal intelligence because a child can become an adult human. I agree that a newborn baby is not substantially more valuable than a close-to-term one and that there is no strong reason for caring about a euthanised baby over one that is never born, but I’m not convinced that assigning much lower value to young children is a net benefit for a society not composed of rationalists (which is not to say that it is not a net benefit, merely that I don’t properly understand where people’s actions and professed beliefs come from in this area and don’t feel confident in my guesses about what would happen if they wised up on this issue alone).
The proper question to ask is “If these resources are not spent on this child, what will they be spent on instead and what are the expected values deriving from each option?” Thus contraception has been a huge benefit to society: it costs lots and lots of lives that never happen, but it’s hugely boosted the quality of the lives that do.
I do agree that willingness to consider infanticide and debate precisely how much babies and foetuses are worth is a strong indicator of rationality.
My mother made this argument to me probably when I was in high school. Given my position as past infanticide candidate, it was an odd conversation. For the record, she was willing to go up to two or six years old, I think.
And let us not forget the Scrubs episode she also agreed with: “Having a baby is like getting a dog that slowly learns to talk.”
My mother made this argument to me probably when I was in high school. Given my position as past infanticide candidate, it was an odd conversation.
Hey, now you know you were kept around because you were actually wanted, not out of a dull sense of obligation. It’s like having a biological parent who is totally okay with giving up children for adoption—and stuck around!
That’s an interesting take. She clearly loves me and my siblings and has never hurt anyone to the best of my knowledge, besides. So, it wasn’t an uncomfortable topic—only a bit of an odd position to be in.
Although, I also have to point out adoption does not carry the death penalty, so I can imagine a situation in which my hypothetical parent opts not to kill me because they think the fuzz will catch them.
Hey, now you know you were kept around because you were actually wanted, not out of a dull sense of obligation.
Eliezer, your thought processes and emotions are quite a bit different from those of most currently living humans. And that mostly leaves you quite well-off, but you’ve always got to account for that before you say something like this. How the hell do you know what others, especially children, would feel in an odd situation like that? Me, I know for sure that I’d MUCH rather have a cold/distant but dutiful and conscientious parent than one who could really, seriously plan to kill Pre-Me for their own convenience.
(If that was supposed to be a joke, I claim that it was in bad taste, just like an anti-AI LessWronger’s joke about planning to assassinate you and your colleagues would be.)
I mean, if the general form of your claim is that a joke whose punchline is “your parents wanted you” is in bad taste just as a joke whose punch line is “I’m going to kill you” is, I simply disagree. I find this unlikely; I just mention it because that’s the vast difference between the two examples that jumped out at me.
If the general form of your claim is that a joke that mentions the (unactualizable) possibility of my infanticide is in bad taste just as a joke that mentions the (thus-far-unactualized, but still viable) possibility of my assassination is, I also disagree, though I have more sympathy for the claim. I find this more likely.
If it’s something else, I might agree.
Of course, if you don’t actually mean to make a general claim about what is or isn’t in bad taste, but rather to assert somewhat indirectly that references to infanticide upset you and you’d rather not read them, that’s a whole different kettle of fish and my question is meaningless.
Jokes aren’t only about punchlines; here Eliezer was talking about how the (apparently REAL) fact that a murder was contemplated by the guy’s own mother ended up having an upside.
Yes, that’s true, he was indeed talking about that. I infer that your claim is that talking about that is in sufficiently bad taste to be worth calling out. Thanks for clarifying.
I have said before “I’m a moderate on abortion—I feel it should be okay up to the fifth trimester.” While this does shock people into adjusting what boundaries might be considered acceptable, I no longer think it is something useful to say in most fora. Too much chance of offending people and just causing their brains to shut off.
Don’t unnecessarily cause them to suffer, but on the other hand you can choose to euthanize your own, if you wish, with no criminal consequences.
Yes, I should also be allowed to kill adults. Especially if they have it coming. After all, the infant still has a chance to grow up to make a worthwhile contribution while there are many adults that are clearly a waste of good oxygen or worse!
I’d say the primary value of an infant is the future value of an adult human minus the conversion cost. Adult humans can be enormously valuable, but sometimes, the expected benefits just can’t match the expected costs, in which case infanticide would be advisable.
However, both costs and benefits can vary by many orders of magnitude depending on context, and there’s no reliable, generally-applicable method to predict either. No matter how bad it looks, someone else might have a more optimistic estimate, so it’s worth checking the market (that is, considering adoption).
Is it acceptable to assume that the conversion cost up to a newborn is less than the rest of the way to an adult?
(Think this through before reading on, to avoid biased thinking about the above (This is called “Meditate”, right?))
Given that, wouldn’t a rich eccentric who commits to spend a pool of money either on paying people to roll boulders up and down a hill or on raising the next child he makes you pregnant with cause you to not be allowed to say no? (Edited for clarity)
Is it acceptable to assume that the conversion cost up to a newborn is less than the rest of the way to an adult?
It quite obviously is.
Given that, wouldn’t a rich eccentric who commits to spend a pool of money either on paying people to roll boulders up and down a hill or on raising your child cause you to not be allowed to refuse him?
If you mean as an alternative to infanticide, definitely. What’s your point?
What I meant to say is that this complete stranger wants to have a child with Strange7 (for this hypothetical Strange7 can get pregnant) and it would be as wrong/illegal for Strange7 to not do so as late abortion or infanticide would be. (Edited grandparent for clarity)
If this hypothetical rich person is able and willing to cover all the costs of me bearing a child and the child being raised, they can draft a contract and present it to me. What greater good would be served by making it illegal for me to refuse? Such a law would weaken my negotiating position, increasing the chances that the rich eccentric would be able to avoid internalizing some of the long-term costs and/or that I would be put in the position of having to give up some marginally more lucrative prospect in order to avoid the legal penalty.
I’d rather not try to derive the full ethical calculus of abusive relationships and rape from first principles, but I can point you at some people who’ve studied the field enough to come up with excellent working approximations for most real-world cases.
Are you allowed to use moral questions as litmus tests for rationality? Paper clippers are rational too.
It isn’t inconceivable that a human might just value babies intrinsically (rather than because they possess an amount of intellect, emotion, and growth potential).
If anyone here has been reading this and trying to use more abstract values to justify why one should not harm babies, and is unable to come up with anything, and still feels a strong moral aversion to anyone harming babies anywhere ever, then maybe it means you just intrinsically value not harming babies? As in, you value babies for reasons that go beyond the baby’s personhood or lack thereof?
(By the way, the abstract reason I managed to come up with was that current degree of personhood and future degree of personhood interact in additive ways. I’ll react with appreciation to someone poking a hole in that, but I suspect I’ll find another explanation rather than changing my mind. It’s not that I necessarily value babies intrinsically—it’s more that I don’t fully understand my own preferences at an abstract level, but I do know that a moral system that allows gratuitous baby-killing must be one that does not match my preferences. So if you poke a hole in my abstract reasons, it merely means that my attempt to abstractly convey my preferences was wrong. It won’t change the underlying preference.)
But a good chunk of rationality is separating emotions from logic
Even if I insert “epistemic”, I find this only partially true.
Edit: Although, my preferences do agree with yours to the extent that harming a young child does seem worse than harming a baby (though both are terrible enough to be illegal and punishable crimes). So I might respect the idea of merciful killing (in times of famine, for example) at a young age to prevent future death-inducing suffering.
Agreeing with the logic is OK, but the problem with reductionism is that if you draw no lines, you’ll eventually find that there’s no difference between anything.
Thus the basic reductionist/humanist conflict: how does one escape the ‘logic’ and draw a line?
Draw a gradient rather than a line. You don’t need sharp boundaries between categories if the output of your judgment is quantitative rather than boolean. You can assign similar values to similar cases, and dissimilar values to dissimilar cases.
See also The Fallacy of Gray. Now you’re obviously not falling for the one-color view, but that post also talks about what to do instead of staying with black-and-white.
Sure. But I was referring to my worry that if you don’t allow your values to be arbitrary (e.g., I don’t care about protecting fetuses but I care about protecting babies), you may find you wouldn’t have any. I guess I’m imagining a story in which a logician tries to argue me down a slippery slope of moral nihilism; there’ll be no step I can point to that I shouldn’t have taken, but I’ll find I stepped too far. When I retreat uphill to where I feel more comfortable, can I expect to have a logical justification?
I’m not sure what “arbitrary” means here. You don’t seem to be using it in the sense that all preferences are arbitrary.
a story in which a logician tries to argue me down a slippery slope of moral nihilism
If the nihilist makes a sufficiently circuitous argument, they can ensure that there’s no step you can point to that’s very wrong. But by doing so, they will make slight approximations in many places. Each such step loses an incremental amount of logical justification, and if you add up all the approximations, you’ll find that they’ve approximated away any correlation with the premises. You don’t need to avoid following the argument too far, if you appropriately increase your error bars at each step.
Each such step loses an incremental amount of logical justification, and if you add up all the approximations, you’ll find that they’ve approximated away any correlation with the premises. You don’t need to avoid following the argument too far, if you appropriately increase your error bars at each step.
From your answer, I guess that you do think we have ‘justifications’ for our moral preferences. I’m not sure. It seems to me that on the one hand, we accept that our preferences are arational, but then we don’t really assimilate this. (If our preferences are arational, they won’t have logical justifications.)
I’m not sure what “arbitrary” means here. You don’t seem to be using it in the sense that all preferences are arbitrary.
That seemed to be exactly how he’s using it. It would be how I’d respond, had I not worked it through already. But there is a difference between arbitrary in: “the difference between an 8.5 month fetus and a 15 day infant is arbitrary” and “the decision that killing people is wrong is arbitrary”.
Yes, at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.
There’s a lot more about this in the whole sequence on metaethics.
I am generally confused by the metaethics sequence, which is why I didn’t correct Pengvado.
at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.
Agreed, as long as you have found a consistent set of arbitrary principles to cover the whole moral landscape. But since our preferences are given to us, broadly, by evolution, shouldn’t we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?
So when we adjust to a new location in the moral landscape and the logician asks us to justify our movement, it seems that, generally, the correct answer would be to shrug and say, ‘My preferences aren’t logical. They evolved.’
If there’s a difference in two positions in the moral landscape, we needn’t justify our preference for one position. We just pick the one we prefer. Unless we have a preference for consistency of our principles, in which case we build that into the landscape as well. So the logician could pull you to an (otherwise) immoral place in the landscape unless you decide you don’t consider logical consistency to be the most important moral principle.
But since our preferences are given to us, broadly, by evolution, shouldn’t we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?
Yes.
I have a strong preference for a simple set of moral preferences, with minimal inconsistency.
I admit that the idea of holding “killing babies is wrong” as a separate principle from “killing humans is wrong”, or holding that “babies are human” as a moral (rather than empirical) principle simply did not occur to me. The dangers of generalizing from one example, I guess.
Aren’t abortions unnecessarily painful? This is as strong an argument pro-life as pro-infanticide.
I agree there is a continuum between conception and being, say, 2 years old that is only superficially punctuated by the date of birth. Yet our cultural norms are not so inconsistent...
General cultural norms label [infanticide] as horrific, and most people’s gut reactions concur.
For example, many of these same people would find it horrific to kill a late-stage fetus. And they might still find it horrific to murder a younger fetus, but nevertheless respect the mother’s choice in the matter.
Voted up, but I think abortion shouldn’t be legal, other than for medical reasons (life of the mother), once the fetus is old enough to have brain activity, and I’m an unrepentant speciesist.
As I recall (I haven’t gone to check), fetuses have “brain activity” about the same time they have a beating heart… ie about one week after conception. The brain activity regulates the heartbeat.
The problem with your definition is that it’s very vague—it doesn’t carve reality at the joints.
I myself prefer the “viability” test. If a foetus is removed from the mother… and survives on its own (yes, with life support) then it is “viable” and gets to live. If it’s too undeveloped to live… then it doesn’t. This stage is actually not very far prior to birth—somewhere around 34-36 weeks (out of 40) (again as I recall without having to look it up).
This is very similar (but gives just a bit more wiggle room) to the “birth” line… ie it disentangles the needs of the mother from the needs of the child, and can be epitomised by the “which would you choose to save” test.
If you had to choose between the life of the mother or the life of the child: if the child is not viable without the mother—then there is no choice necessary: you choose the mother, because choosing the child will result in them both dying. But if the child is viable—then you actually have to choose between them as individual people.
This stage is actually not very far prior to birth—somewhere around 34-36 weeks (out of 40) (again as I recall without having to look it up).
Actually a good bit earlier than that. Like 24, 25 weeks I think is the age where you get 50% survival (with intensive medical care, but you seem to say that’s ok).
Ok… then I should clarify. If the mother has 100% chance to live, but the foetus has only 50% chance to live… and only on seriously intensive care… I do not consider that an equal chance to live.
I use the 34-36 week limit because women are encouraged to continue to 34-36 weeks if at all possible (based on what my mother, who is an experienced midwife, tells me).
I guess the 34-36 weeks cutoff is, for me, a reasonable chance at living on just minimal life support. ie the mother and the child have a roughly equal chance of survival… thus it becomes a choice between them where external factors of who they are (or potentially could be) are the main issue—rather than simply based upon survival probability.
So, as technology improves and artificial substitutes become viable progressively earlier in the developmental process, you’ll eventually be advocating adoption as an alternative to the morning-after pill?
If people are willing to pay for the cost of those artificial substitutes—then I would have no problem with it. If there are sufficient people wanting to adopt, too.
There is still a step between “being fine with it” and “advocating for”—that’s turning a “could” into a “should”, and you have not given any evidence for why this should become a “should”.
Right now I’d still not see a benefit for advocating for a child to be placed onto this kind of life-support if the parents do not want it. If the adoptive parents do, then no problems.
The issue with what FAWS is proposing is that “brain activity” is vague in the extreme. Ants have brain activity...
Proposed litmus test: infanticide.
General cultural norms label this practice as horrific, and most people’s gut reactions concur. But a good chunk of rationality is separating emotions from logic. Once you’ve used atheism to eliminate a soul, and humans are “just” meat machines, and abortion is an ok if perhaps regrettable practice … well, scientifically, there just isn’t all that much difference between a fetus a couple months before birth, and an infant a couple of months after.
This doesn’t argue that infants have zero value, but instead that they should be treated more like property or perhaps like pets (rather than like adult citizens). Don’t unnecessarily cause them to suffer, but on the other hand you can choose to euthanize your own, if you wish, with no criminal consequences.
Get one of your friends who claims to be a rationalist. See if they can argue passionately in favor of infanticide.
Kudos to you for forthrightness. But em… no. Ok, first, it seems to me you’ve swept the ethics of infanticide under the rug of abortion, and left it there mostly unaddressed. Is an abortion an “ok if regrettable practice?” You’ve just assumed the answer is always yes, under any circumstances.
I personally say “definitely yes” before brain development (~12 weeks I think), “you need to talk to your doctor” between 12 and 24 weeks, and “not unless it’s going to kill you” after 24 weeks (fully functioning brain). Anybody who knows more about development is welcome to contradict me, but those were the numbers I came up with a few years ago when I researched this.
If a baby/fetus has a mind, in my books it should be accorded rights—more and more so as it develops. I fail to see, moreover, where the dividing line ought to be in your view. Not to slippery-slope you but—why stop at infants?
*(Also note that this is a first-principles ethical argument which may have to be modified based on social expedience if it turns into policy. I don’t want to encourage botched amateur abortions and cause extra harm. But those considerations are separate from the question of whether infants have worth in a moral sense.)
This gave me a nasty turn, because probably the most annoying idea religious people have is that if we’re “just” chemicals, then nothing matters. One has to take pains to say that chemicals are just what we’re made of. We have to be made out of something! :) And what we’re made of has precisely zero moral significance (would we have more worth if we were made out of “spirit”?).
I mean, I could sit here all day and tell you about how you shouldn’t read “Moby Dick,” because it’s just a bunch of meaningless pigment squiggles on compressed wood pulp. In a certain very trivial sense I am absolutely right—there is no “élan de Moby Dick” floating out in the aether somewhere independent of physical books. On the other hand I am totally missing the point.
Sorry, you have a point that my test won’t apply to every rationalist.
The contrast I meant was: if you look at the world population, and ask how many people believe in atheism, materialism, and that abortion is not morally wrong, you’ll find a significant minority. (Perhaps you yourself are not in that group.)
But if you then try to add “believes that infanticide is not morally wrong”, your subpopulation will drop to basically zero.
But, rationally, the gap between the first three beliefs, and the last one, is relatively small. Purely on the basis of rationality, you ought to expect a smaller dropoff than we in fact see. Hence, most people in the first group are avoiding the repugnant conclusion for non-rational reasons. (Or believing in the first three, for non-rational reasons.)
If you personally don’t agree with the first three premises, then perhaps this test isn’t accurate for you.
Well, my comment from http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1sek would probably be better here. I still dispute that argument, as I think this drop-off is justified, even for rationalists.
So your point is that anyone who feels there is a moral difference between infanticide and abortion is irrational?
Because most pro-lifers already say that, in my experience.
The standard answer is that at that point there is no longer a conflict with the rights of the woman whose body the infant was hooked into. We don’t generally require that people give up their bodily autonomy to support the life of others.
The complication here is that a responsible, consenting adult tacitly accepts giving up her bodily autonomy (or accepts a risk of doing so) when she has sex. That’s precisely the same reason men are required to pay child support even if they didn’t wish for a pregnancy. (Yes, I see the asymmetry; yes, it sucks).
Case-by-case reasoning is probably a good thing in these circs, but unless the mother was not informed (minor/mental illness) or did not consent, then the only really tenable reason for a late-term abortion I can think of is health. In which case the relative weighing of rights is a tricky business, a buck I will pass to doctors, patients & hospital ethics boards.
This is already a significant retreat from your previously stated position. (“not unless it’s going to kill you” after 24 weeks)
That’s a hell of an assertion. I don’t really see any reason to accept it as other than a normative statement of what you wish would happen.
As you say, there is an asymmetry. Garnishing a wage is a bit different, and seems appropriate to me.
Yes, it is, so long as it is reasoning rather than assertions that this case is different. We have to specify how it is different, and how those differences make a difference. The easiest way for me to do this is to use analogies. This is dangerous of course, as one must keep in mind that analogies can ignore relevant differences while emphasizing surface similarities.
So, in this case the relevant specialness you’re calling out is that a risky activity was knowingly engaged in that created a person who needs life support for some time, as well as care and feeding far after that. So I’m going to try to set up an analogous situation, but without sex (which I think is irrelevant) being the act that comes into the mix. This will also mean another difference: the person will not be “created” except metaphorically from a preëxisting person. I personally don’t see how that would be relevant, but I suppose it is possible for others to disagree.
Suppose a person is driving, and crashes into a pedestrian. This ruptures the liver of the pedestrian. A partial transplant of the driver’s liver will save the pedestrian’s life. Is the driver expected to donate their liver? Should it be required by law?
Note that the donor’s death rate for this operation is under 1% (i.e. under 1,000 per 100,000). When we compare this to the statistics for maternal death, we see it is similar to the WHO’s 2005 estimate of the world average of 900 per 100,000, though developed regions are far lower, at 9 per 100,000.
Is it? I suppose it is. I contain multitudes. No, honestly, I just didn’t name all my caveats in the previous post (my bad). Clearly there are two people’s interests to take into consideration here. Also, as I noted, that was an ethical rather than legal argument. I don’t have any strong opinions about what the law should do wrt this question.
I don’t think it’s unreasonable, although you’re right it’s not a fact statement. But I think it’s a fairly well-established principle of ethics & jurisprudence that informed consent implies responsibility. Nobody has to have unprotected sex, so if you (a consenting adult) do so, any reasonably foreseeable consequences are on your shoulders.
It’s a reasonably good analogy I guess. There are two separate questions here: what should the law do, and what should the driver do. I don’t think anybody wants the law to require organ donations from people who behave irresponsibly. However, put in the driver’s shoes, and assuming the collision was my fault, I would feel obligated to donate (if, in this worst-case scenario, I am the only one who can).
There is a slight disanalogy here though, which is that an abortion is an act, whereas a failure to donate is an omission. It’s like the difference between throwing the fat guy on the tracks and just letting the train hit the fat guy.
I’m curious about the reasoning on what the difference is, except maybe that, no better options being available (it seems), we use omission as the default strategy when the consequences are not within our grasp (since watching and gathering more information will at least not worsen your later ability to come to a conclusion, with the only caveat that by then it may be too late to act).
“Suppose a person is driving, and crashes into a pedestrian. This ruptures the liver of the pedestrian. A partial transplant of the driver’s liver will save the pedestrian’s life. Is the driver expected to donate their liver? Should it be required by law?”
For organ transplantation, the body biochemistries of the organ donor and recipient must be somewhat compatible, otherwise the transplanted organ gets rejected by the recipient’s immune system. The best transplantation results are between identical twins. For unrelated people, there are tests to estimate the compatibility of organs, and databases. A conclusion: the driver is not generally expected to donate their liver, because in the majority of cases it would not help the victim.
Imagine an alternate universe, where all the human bodies are highly compatible for transplantation purposes.
Yes, I believe it might become a social norm in this alternate universe, or even a law, that the driver must donate their liver to the victim.
This depends mostly upon whether you think that law should enforce doing actions which save lives with insignificant risk to the actor.
If yes, then this (quite special) case is clear-cut, given a few assumptions (liver matches and is healthy, is not already scheduled for another similarly important surgery, etc. etc.). However, at least as far as I know, this is not the case.
And I doubt it will be soon (I simply have not thought yet about whether it should). Just an example: in Austria, by default, all deceased people are potential donors—you have to file an explicit opt-out. This is quite different from, for instance, Germany. Therefore we have a relatively good “source” of organs. However, though it is sometimes under discussion, Germany has not changed its legislation, even with the possibility to compare the numbers. Maybe for religious reasons, or freedom of whomever. I didn’t follow it that closely...
If such simple matters (we are talking about already medically dead persons) do not change within years, what can be expected for such really fundamental decisions?
I am very much in favour of this sort of policy; it would do no end of good.
The effect of pretending to have opt-out organ donation is small. Austria is unique in really having opt-out organ donation (everywhere else, next of kin decide in practice), so it’s hard to judge the effect, but it’s not an outlier. In the 90s, Spain became the high outlier and Italy ceased being the low outlier, so rapid change is possible without doing anything ethically sensitive. graph. More Kieran Healy links here.
An interesting article.
“Reform of the rules governing consent is often accompanied by an overhaul and improvement of the logistical system, and it is this—not the letter of the law—that makes a difference. Cadaveric organ procurement is an intense, time-sensitive and very fluid process that requires a great deal of co-ordination and management. Countries that invest in that layer of the system do better than others, regardless of the rules about presumed and informed consent.”
In our country, we have opt-out donation, but I guess the relatives can have a veto. I have seen a physician on TV who said some scary things openly. Our doctors are standardly overworked and underpaid. Imagine a doctor who, towards the end of a long shift, sees a patient dying with some of the organs intact. If he decides to report the availability of the organs, he creates an extra several hours of work for himself and others, paperwork included. There is either no financial reward for reporting the organs or very little, I do not remember exactly. They might feel heroic the first couple of times, but eventually they resign themselves and stop making these reports once they have worked long enough. I saw this on TV about 3 years ago; I do not know the current situation.
The driver could instead be made responsible for the victim’s exact medical costs or some fraction thereof, in addition to any punitive or approximated damages. This would provide adequate incentive to seek out ways to reduce those costs, including but not limited to a voluntary donation on the part of the driver or someone who owes the driver a favor.
In the abortion example, the fetus 1) is created already attached and ending ongoing life support may not be the same as requiring that someone who is not providing it provide it, 2) needs life support for an extended period, and 3) can only use the life support of one person.
The complication there is that on the standard view, one cannot give up one’s bodily autonomy permanently. You cannot sell yourself into slavery. The pregnant person always has the right to opt-out of the contract.
Though the fetus would presumably be able to get damages. I guess those get paid to the next-of-kin.
Upvoted entirely for this line, which made me spit coffee when it finally registered.
In the first month of pregnancy, right, but in the seventh month you can Caesarean the baby out of the mother and put it into an incubator, can’t you?
Not without some risk to both, the exact amounts depending on the situation.
(I’m assuming that by “some” you mean ‘larger than that of either abortion or natural childbirth’, otherwise it wouldn’t be relevant. Right?)
Smaller would be relevant too, for the opposite reason.
We don’t?
In what situation, exactly, do we fail to do this? I can’t think of any other real-world situation. I can imagine counterfactual ones, sure, but I’m fairly certain most people see those as analogies for abortion and respond appropriately.
We don’t, for instance, require people to donate redundant organs, nor even blood. Nor is organ donation mandatory even after death (perhaps it should be).
What are some cases where we do require people to give up their bodily autonomy?
Mandatory drug testing?
That’s the big one I can think of, and this usually arises in a very different context where it’s easy to dehumanize those forced to take such tests: alleged criminals and children.
(Even in these contexts, peeing in a cup or taking a breathalyzer is quite a bit less severe than enduring a forced pregnancy. Mandatory blood draws for DUIs do upset a significant number of people. How you feel about employment tests and sports doping might depend on how you feel about economic coercion and whether it’s truly “mandatory”.)
Sidetrack:
When one chooses subjective experience of pain and pleasure as a basic requirement for the privilege of being taken into account when deciding moral matters, and if one assumes that this privilege applies only gradually (i.e. the pain/pleasure experience of a dog is less vivid than that of a human, etc.), then the immediate rightness/wrongness of an action like abortion/infanticide with regard to the fetus/baby should correlate with similar decisions about pets.
simplicio:
But if, as I think, we also have common ground in preferring consequentialist ethics, which also more or less leads to resolving “omission vs. act” as both being similarly morally active, then one has to take into account that an abortion or infanticide makes it impossible for this person to develop, whereas a dog will never by itself, however long you wait, suddenly develop the vivid subjective experience of a human.
And then you have to take into account that consequentialism demands to take more factors into account, like the increase of bad-practice abortions and increased mental stress for many people.
DonGeddis:
However, if you do take those matters into account, then the conclusion is not “bad, but OK because of some reasons we do not like”, but simply “OK”. Or not. Whatever conclusion you may come to. And yes, it would probably be a case-by-case decision. Extremely complicated, and, given the nature of human thought, probably more open to manipulation than one would like.
Then, when we have failed to simplify the method of determining the consequences, we fall back to a “practical simplification”, and here a common line of thinking is: well, there may not be a sharp line between a fetus and a newborn, but we have exactly one criterion we can count on (birth), and it is sufficiently similar to the “real thing” that one can use this metric without too much of a problem. And yes, in practice it works not too badly (when compared with other legislation).
Time of birth serves as a bright line.
Very much agreed. This is also why we place much more moral value on the life of a severely brain-damaged human than on a more intelligent non-human primate.
Despite some jokes I made earlier, things that could arguably depend on values don’t make good litmus tests. Though I did at one point talk to someone who tried to convert me to vegetarianism by saying that if I was willing to eat pork, it ought to be okay to eat month-old infants too, since the pigs were much smarter. I’m pretty sure you can guess where that conversation went...
You started eating month-old infants?
Option zero: “There’s an interesting story I once wrote...”
Option one: “Well then, I won’t/don’t eat pork. But that doesn’t mean I won’t eat any animals. I can be selective in which I eat.”
Option two: “mmmmm… babies.”
Option three: “Why can’t I simply not want to eat babies? I can simply prefer to eat pigs and not babies”
Option four: “Seems like a convincing argument to me. Okay, vegetarian now.” (after all, technically you said they tried, but you didn’t say they failed. ;))
Option five: “actually, I already am one.”
Am I missing any (somewhat) plausible branches it could have taken? More to the point, is one of the above the direction it actually went? :)
(My model of you, incidentally, suggests option three as your least likely response and option one as your most likely serious response.)
Well, not quite option two, but yes, “You make a convincing case that it should be legal to eat month-old infants.” One person’s modus ponens is another’s modus tollens...
I actually did a presentation arguing for the legality of eating babies in a Bioethics class.
And I don’t eat pigs, on moral grounds.
Option six: “I was a vegetarian, but I’m okay with eating babies, and if pigs are just as smart, it should be okay to eat them too, so you’ve convinced me to give up vegetarianism.”
This reminds me of the elves in Dwarf Fortress. They eat people, but not animals.
I’m imagining this conversation while you’re both holding menus...
In seriousness, there are good instrumental reasons not to allow people to eat month-old infants that are nothing to do with greatly valuing them in your terminal values.
Both menus being “vegetarian and non vegetarian” or “pork menu and baby menu”? :)
That guy clearly asked you those questions in the wrong order.
Do you believe killing animals for food is OK?
Killing animals for food is the same as eating babies!
Do you believe killing babies for food is OK?
… is obviously going to activate biases leading to the defense of killing animals for food, whether by denying they are equivalent or claiming to accept killing children for food. Thus the chance of persuading someone that eating babies is morally acceptable depends on how strongly you argue the second point.
However...
Do you believe killing babies for food is OK?
Killing animals for food is the same as eating babies!
Do you believe killing animals for food is OK?
… leads to the opposite bias, as if the listener cannot refute your second point they must convert to vegetarianism or visibly contradict themselves.
this is sounding like a copout....
It isn’t a question of current intelligence, it’s a question of potential. Pigs will never grow beyond human-infant-level comprehension. Human babies will eventually become both sapient and sentient.
Saying a baby and a pig can be considered equally intelligent is like saying a midget and an 11-year-old of the same height are equally likely to become basketball players.
No, saying a baby and a pig can be considered equally intelligent is like saying a midget and an 11-year-old can be considered equally tall.
Doesn’t this depend on whether one is referring to fluid intelligence or crystallized intelligence? Human babies may have the same crystallized intelligence as adult pigs, but they have much higher fluid intelligence.
I think what happened here is that the vegetarian failed to realize that the component of intelligence that people find morally significant is fluid, not crystallized, and then he equivocated between the two. EY realized what was going on, even if subconsciously, which is why he trolled the vegetarian instead of disputing his premise. Finally, Fallible failed to pick up on the distinction entirely by assuming that “intelligence” always refers to fluid intelligence.
How about fertilized egg cells?
Caviar made from fertilized human egg cells, yum.
I like this test, with the following cautions:
The regrettability of abortion is connected to the availability of birth control, and so similarly, the regrettability of infanticide should be connected to the availability of abortion. A key difference is that while birth control may fail, abortion basically doesn’t. I can think of a handful of reasons for infanticide to make sense when abortion didn’t, and they’re all related to things like unexpected infant disability the parents aren’t prepared to handle, or sudden, badly timed, unanticipated financial/family stability disasters.
In either case, given that the baby doesn’t necessarily occupy privileged uterine real estate the way a fetus must, I think it makes sense to push adoption as a strongly preferred recourse before infanticide reaches the top of the list. Unlike asking a woman who wants an abortion to have the baby and give it up for adoption, this imposes no additional cost on her relative to the alternative.
Additionally, I think any but the most strongly controlled permission for infanticide would lead to cases where one parent killed their baby over the desire of the other parent to keep it. It seems obvious to me that either parent’s wish that the baby live—assuming they’re willing to raise it or give it up for adoption, and don’t just vaguely prefer that it continue being alive while the wants-it-dead parent deals with its actual care—should be a sufficient condition that it live. I might even extend this to other relatives.
Basically, this is a variant on the argument from marginal cases; infants don’t differ from relatively intelligent nonhuman animals in capabilities, so they ought to have the same moral status. If it’s okay to euthanize your dog, it should also be okay to euthanize your newborn.
(The most common use of the argument from marginal cases is to argue that animals deserve greater moral consideration, and not that some humans deserve less, but one man’s modus ponens is another man’s modus tollens.)
Circa 1792, after Wollstonecraft’s A Vindication of the Rights of Woman, a philosopher named Thomas Taylor published a reductio ad absurdum/parody entitled A Vindication of the Rights of Brutes, which basically took Wollstonecraft’s arguments for greater gender equality and replaced women with animals. It reads more or less like an animal rights pamphlet written by Peter Singer.
Professor Mordin Solus solves marginal cases by refusing to experiment on any species with at least one member capable of Calculus, which is a bit different from the usual counter to that criticism, the “argument from species normality.”
Any species with at least one member who has demonstrated to humans the capability of Calculus.
So it’s perfectly acceptable to use a time machine to gather your experimental subjects from before the 17th century.
Also, once a human solves the problem of friendly AI, aliens will stop abducting us and accept us as moral agents.
That sounds like a reasonable conclusion—compared to an intelligence capable enough of introspection and planning to make a friendly AI, the overwhelming majority of my actions arise purely from unreasoning instinct.
Any species with at least one member who has demonstrated to humans the capability of doing calculus as per human notions of “doing calculus”.
I don’t remember the source, but I read a piece of fiction somewhere in which an alien observed a few children playing catch. The alien commented on how impressed it was that they could do such sophisticated calculations so quickly at such a young age.
Your parenthetical comment is the funniest thing I’ve read all day! The contrast with the seriousness of subject matter is exquisite. (You’re of course right about the marginal cases thing too.)
This is a hand, this is an inviolate right to life...
That’s an amusing example because infanticide was extremely common among human cultures, so all good cultural relativists should be fine with this practice.
Usually there was a strong distinction between actually killing a baby (an extremely wrong thing to do) and abandoning it to the elements (acceptable). I’m not talking about any exotic cultures: ancient Greece and Rome, and even large parts of Christian medieval Europe, practiced infant abandonment. There are even examples of Greek and Roman writers noting how strange it is that Egyptians and Jews never kill their children—perfect stuff for any cultural relativist. It was only once people switched from abandoning infants to the elements to abandoning them at churches that it ceased to be outright infanticide.
Anyway, pretty much the only reason babies are cute is as a defense against abandonment. That suggests abandonment was never anything exceptional and was always a major evolutionary force. By some estimates, up to 50% of all babies were killed or abandoned to certain death in Paleolithic societies (all such claims are highly speculative, of course).
Infant abandonment is normal, and people should have the same right to abandon their babies as they always had. Especially since these days we just put them into orphanages. Choosing infanticide over abandonment is pretty pointless, so why do it?
A lot of sources can be easily found here: http://en.wikipedia.org/wiki/Infanticide
“Choosing infanticide over abandonment is pretty pointless, so why do it?”
How about infanticide as euthanasia ?
Killing another living thing doesn’t qualify as “euthanasia” if you do it for your benefit, not that being’s.
By abandoning an infant, for example by giving it to an orphanage (not legal everywhere, but in a lot of countries perfectly legal and acceptable), you lose both your responsibility for and your control over the baby, so you no longer have any right to make that kind of decision about its life.
And speaking of euthanasia, we really should seriously re-ban it. We pretty much know how to deal with even the most severe pain—very large doses of opiates to get rid of it, and large doses of stimulants like amphetamines to counter the side effects. The War on Drugs is the reason we don’t routinely do this for people in severe pain.
We don’t have a magical cure for depression, but if someone is depressed, they cannot make rational decisions for themselves anyway, so they cannot legitimately decide to kill themselves.
Once you cover these cases, there are zero legitimate arguments left for euthanasia.
“Choosing infanticide over abandonment is pretty pointless, so why do it?” “Killing another living thing doesn’t qualify as ‘euthanasia’ if you do it for your benefit, not that being’s.”
Let me respond with a little storytelling, without making a clear point. I am not trying to prove you wrong, just sharing my personal experience. Warning: depressing stories about illness; probably hard reading.
I was once friends with a boy who had progressive muscular dystrophy. It is a degenerative disease in which your muscles gradually stop working, and around the age of 20 most patients die because they stop breathing. If you have heard great stories about people in wheelchairs adapting to their situation, well, here adaptation can only be short-term, because next year you might not be able to do what you can do now. The pain was not excruciating, but there was some; a body deprived of exercise gives you that feedback. If he had a bad dream at night, he could not turn over to the other side (a very usual remedy, which most people apply without even realizing it). The boy made two suicide attempts, although, frankly, he did not really mean them. He would phone his friends in the evening to relieve his pain—very unwelcome calls. I sometimes pretended not to be at home, and I know other people who did the same (we were in our twenties). Then his desperation was deepened by the feeling that he was not loved. Once he called his psychologist and caught her in the middle of a suicide attempt, poisoned by drugs—she repeated back to him HIS own statements from the previous phone calls. I am not saying it was HIS fault; the lady clearly failed to safeguard against the known risks of her profession (plus she had other problems, a partner who had left, etc.). I am just illustrating how hard it sometimes was to deal with him. (He called other people, who saved her life, to close up this branch of the story.) His parents took great care of him, up to the limit of their financial means, plus the limited help of our government. There were frequent conflicts between him and his parents, though, which made him feel unloved, again. On the other hand, his parents were deeply religious and, knowingly, later had another baby with the same genetic defect; they did not choose abortion. The older boy died at the age of 28, his life being surprisingly long.
This story clearly contains aspects which were not optimized: the parents could have earned more money and brought more comforts into his life; he could have had a personal assistant at night, more physiotherapy exercises, a better computer, some lessons on how to deal with people and get a girlfriend (his desires were strong); he could have tried harder to develop his talents and get a job that would make him feel useful to society. (We eventually persuaded him to get a job as a phone operator; it lasted a year or so.) His friends, including me, could have worked harder on their emotional maturity. But can you see all the energy and resources it takes just to make a misery somewhat better?
Now let us look at a different story, where the parents of a sick child became EXTREME optimizers. Watch the film Lorenzo’s Oil (http://en.wikipedia.org/wiki/Lorenzo%27s_Oil_%28film%29) or read about Lorenzo Odone (http://en.wikipedia.org/wiki/Lorenzo_Odone). A wonderful and admirable story. But can you see the end result, after you do all that is in your power for your baby?
“Choosing infanticide over abandonment is pretty pointless, so why do it?” Abandoning a baby with a severe genetic defect at birth condemns the baby to an even lower quality of life in most government institutions, unless a millionaire chooses to adopt him.
I have a counterargument to my own reasoning right away—what if some parents had killed their baby diagnosed with adrenoleukodystrophy (but with no symptoms developed yet) a year before Augusto and Michaela Odone invented Lorenzo’s Oil for their son? Such parents would have lost a potentially healthy baby, and the baby would have lost a realistic chance to live a normal life...
I am not really trying to win this argument, just explaining why I sometimes TOY with the idea that infanticide is not so immoral, and with considering it a form of euthanasia.
There are plenty of diseases we can now deal with quite well precisely because we didn’t kill every infant or adult who had them. It is no coincidence that treatments get found: if we killed everyone with a disease, there would be no search for a treatment.
Is this one of those “torture one person for 50 years” versus “deaths of millions” thought experiments?
Easiest thought experiments ever?
Would you rather be tortured for 3^^^3 years, or have a dust speck in your eye?
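(A side note for readers unfamiliar with the notation: 3^^^3 is Knuth’s up-arrow notation. Assuming the standard definition, it expands as

$$3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3), \qquad 3\uparrow\uparrow 3 = 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987,$$

so 3^^^3 is a power tower of 3s roughly 7.6 trillion levels high. The exact value doesn’t matter for the thought experiment; the point is only that it is unimaginably large.)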
If I use UDT2 can I choose ‘both’?
This seems like a good “control” thought experiment to determine whether people are just being contrarian.
I think you’d have to be a pretty unsubtle contrarian to answer that with “torture”.
And yet, at least one person below did just that. Edit: …but later asserted that had been a joke.
I think in this case you can drop the suffix and just say “being contrary”.
More like, to determine whether people are paying any attention. (I once took an online personality test which included questions such as “I’ve never eaten before” to prevent people from using bots or similar to screw up their data.)
It’s hard to get people to answer such things straightforwardly. I once included “Some people have fingernails” in a poll, as about the most uncontroversially true thing I could think of, and participants found a way to argue that it wasn’t true—since “some” understates the proportion.
Well… “Some people” usually does implicate ‘not all people, and not even all people except a negligible minority’, but if we go by implicatures rather than literal meanings, “X has fingernails” (in contexts where everyone knows X is a human), in my experience at least, usually implicates that X’s fingernails are not trimmed nearly as short as possible, since the literal meaning would be quite uninformative once you know X is a human.
“There exists at least one X that …” is what logicians have settled on as the most easily satisfiable and least objectionable phrasing.
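(A minimal formalization of the contrast, with an invented predicate name used purely for illustration: the logician’s reading of the poll item is the bare existential

$$\exists x\,\big(\mathrm{Person}(x) \wedge \mathrm{HasFingernails}(x)\big),$$

which a single example satisfies, whereas the conversational reading of “some” adds the implicature “and not nearly all”, which is what the participants were objecting to.)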
That’s not that easy, unless having a dust speck in my eye also entails my living for 3^^^3 years.
I nominate ABrooks as this month’s contrarian.
Wait, what?
To clarify:
A = Dust speck in your eye, and your life is otherwise as it would have been without this deal.
B = 3^^^3 years of torture, followed by death.
Is that an easy choice for you?
If not, can you summarize your arguments in favor of choosing B?
Well, if I choose B, I’ll be alive for a very large number of years. I’ll be alive so long, that I expect that I’ll get used to anything deployed to torture me. And I’ll be alive so long, I’d need to study a fair amount of cosmology just to understand what my lifetime will involve, by way of the deaths and rebirths of whole universes or whatever. Some of that would be interesting to see.
The easy thought experiment would be dust speck vs. 3 years of torture followed by death. I think there, I’d go with the speck.
Is this based on the experience of torture victims? I think that “get used to” would more closely resemble “catatonic” than “unperturbed.” I don’t think your ability to be interested would survive very long.
I wonder if there’s a case study of an individual who has been exposed to prolonged torture. We’d probably have to look through the Nazi and Japanese experiments.
(takes deep breath)
AAAAAAAAAAAAAAAAAIIIIIEEEEEEEEEEEE
sorry, I just had to scream for a bit
Them dust specks hurtin’?
I...um. Are you agreeing with me? Or did I say something stupid?
I think you can be confident that he’s not agreeing with you.
I ask only that people disagree with me in such a way that my errors are corrected.
If you’ve acclimated to torture it’s no longer torture.
If you’ve acclimated to torture the effects have likely left you with a life not worth living.
Torture isn’t something you can acclimate yourself to in hypotheticals. E.g., the interlocutor could say, “Oh, you would acclimate to waterboarding? Well then I’ll scoop your brain out, intercept your sensory modalities, and feed you horror. But wait, just when you’re getting used to it, I’ll wipe your memory.”
All this misses the point of the hypothetical by being too focused on the details rather than the message. Have you ever told someone the trolley experiment, had them say something like “but I would call the police” or “I’m not strong enough to push a fat man over,” and had to reformulate the experiment over and over until they got the message?
This is a fair point. Though my response was very much intended to be a joke.
I think this is wrong: saying you’d yell real loud or call the police or break the game somehow is exactly the right response. It shows that someone is engaging with the problem as a serious moral one, and it’s no accident that it’s people who hear these problems for the first time that react like this. They’re the only ones taking it seriously: moral reasoning is not hypothetical, and what they’re doing is refusing to treat the problem hypothetically.
Learning to operate within the hypothetical just means learning to stop seeing it as an opportunity for moral reasoning. After that, all we’re doing is trying to maximize a value under a theory. But that’s neither here nor there.
It is not clear to me that that is a more “right” response than engaging with the problem as a pedagogic tool in a way that aligns with the expectations of the person who set it to me. Indeed, I’m inclined to doubt it.
In much the same way: if I’m asked to multiply 367 by 1472 the response I would give in the real world is to launch a calculator application, but when asked to do this by the woman giving me a neuropsych exam after my stroke I didn’t do that, because I understood that the goal was not to find out the product of 367 and 1472 but rather to find out something about my brain that would be revealed by my attempt to calculate that product.
I agree with you that it’s no accident that people react like this to trolley problems, but I disagree with your analysis of the causes.
You called the trolley problem a pedagogic tool: what do you have in mind here specifically? What sort of work do you take the trolley problem to be doing?
It clarifies the contrast between evaluating the rightness of an act in terms of the relative desirability of the likely states of the world after that act is performed or not performed, vs. evaluating the rightness of an act in other terms.
Okay, that sounds reasonable to me. But what do we mean by ‘act’ in this case? We could, for instance, imagine a trolley problem in which no one had the power to change the course of the trolley, and it just went down one track or the other on the basis of chance. We could still evaluate one outcome as better than the other (presumably the one man dying instead of five), but there’s no action.
Are we making a moral judgement in that case? Or do we reason differently when an agent is involved?
I don’t know who “we” are.
What I say about your proposed scenario is that the hypothetical world in which five people die is worse than the hypothetical world in which one person dies, all else being equal. So, no, my reasoning doesn’t change because there’s an agent involved.
But someone who evaluates the standard trolley problem differently might come to different conclusions.
For example, I know any number of deontologists who argue that the correct answer in the standard trolley problem is to let the five people die, because killing someone is worse than letting five people die. I’m not exactly sure what they would say about your proposed scenario, but I assume they would say in that case, since there’s no choice and therefore no “killing someone” involved, the world where five people die is worse.
Similarly, given someone like you who argues that the correct answer in the standard trolley problem is to “yell real loud or call the police or break the game somehow,” I’m not sure what you would say about your own proposed scenario.
I think it shows someone is trying to “solve” a hypothetical or be clever, because with a trivial amount of deliberation they would anticipate the interlocutor’s response and reformulate. Moreover, none of this engages the point of the exercise, which you’re free to argue against without being opaque. E.g., “okay, clearly the point of this trolley experiment is to see whether my moral intuitions align with consequentialism or utilitarianism; I don’t think this experiment does that, because blah blah blah.”
Moreover, moral reasoning is hypothetical if you’re sufficiently reflective.
Well, in what kinds of things does moral reasoning conclude? I suppose I would say ‘actions and evaluations’ or something like that. Can you think of anything else?
Moral reasoning should inform your moral intuitions—what you’ll do in the absence of an opportunity to reflect. How do you prepare your moral intuitions for handling future scenarios?
Well, regardless of whether we have time to reflect or not, I take it moral reasoning or moral intuitions conclude either in an action or in something like an evaluative judgement. This would distinguish such reasoning, I suppose, from theoretical reasoning which begins from and concludes in beliefs. Does that sound right to you?
An evaluative judgement is an action; you’re fundamentally saying moral reasoning has consequences. I agree with that, of course. I don’t think it distinguishes it from theoretical reasoning.
By ‘action’ I mean something someone might see you do, something undertaken intentionally with the aim of changing something around you. But when we ask someone to react to a trolley problem, we don’t expect them to act as a result of their reasoning (since there’s no actual trolley). We just want them to reply. So sometimes moral reasoning concludes merely in a judgement, and sometimes it concludes in an action (if we were actually in the trolley scenario, for example) that will, I suppose, also involve a judgement. Does all this seem reasonable to you?
This would go quicker if you gave your conclusion and then we talked about the assumptions, rather than building from the assumptions to the conclusion (I think it’s that you want to say hypotheticals produce different results than reality). But to answer your question, I don’t think that giving a result to the trolley problem merely results in a judgement. I think it also potentially results in reflective equilibrium of moral intuitions, which then possibly results in different decisions in the future (I’ve had this experience). I think it also potentially affects the interlocutor or audience.
I’ve already given you my conclusion, such as it is: not that hypotheticals produce different results, but that reasoning about hypotheticals can’t be moral reasoning. I’m just trying to think through the problem myself, I don’t have a worked out theory here, or any kind of plan. If you have a more productive way to figure out how hypotheticals are related to moral reasoning then I’m happy to pursue that.
Right, but I’m just talking about the posing of the question as an invitation for someone to think about it. The aim or end result of that thinking is some kind of conclusion, and I’m just asking what kinds of conclusions moral reasoning ends in. Since we use moral reasoning in deciding how to act, I take it for granted that one kind of conclusion is an action: “It is right to X, and possible for me to X, therefore...” and then comes the action. When someone is addressing a trolley problem, they might think to themselves: “If one does X, one will get the result A, and if one does Y, one will get the result B. A is preferable to B, so...” and then comes the conclusion. The conclusion in this case is not an action, but just the proposition that “...given the circumstances, one should do X.”
ETA: So, supposing that reasoning about the trolley problem here is moral reasoning (as opposed to, say, the sort of reasoning we’re doing when we play a game of chess), then moral reasoning can conclude sometimes in actions, and sometimes in judgements.
Suppose I sit down at time T1 to consider the hypothetical question of what responses I consider appropriate to various events, and I conclude that in response to event E1 I ought to take action A1. Then at T2, E1 occurs, and I take action A1 based on reasoning of the form “That’s E1, and I’ve previously decided that in case of E1 I should perform A1, so I’m going to perform A1.”
If I’ve understood you correctly, the only question being discussed here is whether the label “moral reasoning” properly applies to what occurs at T1, T2, both, or neither.
Can you give me an example of something that might be measurably different in the world under some possible set of conditions depending on which answer to that question turns out to be true?
You’ve understood me perfectly, and that’s an excellent way of putting things. I think there’s an interpretation of those variables such that both what occurs at T1 and at T2 could be called moral reasoning, especially if one expects E1 to occur. But suppose you just, by way of armchair reasoning, decide that if E1 ever happens, you’ll A1. Now suppose E1 has occurred, but suppose also that you’ve forgotten the reasoning which led you to conclude that A1 would be right: you remember the conclusion, but you’ve forgotten why you thought it. That scenario would, I believe, satisfy your description, and it would be a case in which your action is quite suspect. Not wholly so, since you may have good reason to believe your past decisions are reliable, but if you don’t know why you’re acting when you act, you’re not acting in a fully rational way.
I think it would be appropriate to say, in this case, that you are not to be morally praised (e.g. “you’re a good person”, “You’re a hero” etc.) for such an action (if it is good) in quite the measure you would be if you knew what you were doing. I bring up praise, just because this is an easy way for us to talk about what we consider to be the right response to morally good action, regardless of our theories. Does all this sound reasonable?
If what went on at T1 was fully moral reasoning, then no part of the moral action story seems to be left out: you reasoned your way to an action, and at some later time undertook that action. But if it’s true that we would consider an action in which you’ve forgotten your reasoning a defective action, less worthy of moral praise, then we consider it important that the reasoning be present to you as you act.
And I take it for granted, I suppose, that we don’t consider it terribly praiseworthy for someone to come to a bunch of good conclusions from the armchair and never make any effort to carry them out.
I’ll point out again that the phrase “moral reasoning” as you have been using it (to mean praiseworthy reasoning) is importantly different from how that phrase is being used by others.
That aside, I agree with you that in the scenario you describe, my reasoning at T2 (when E1 occurs) is not especially praiseworthy and thus does not especially merit the label “moral reasoning” as you’re using it. I don’t agree that my reasoning at T1 is not praiseworthy, though. If I sit down at T1 and work out the proper thing to do given E1, and I do that well enough that when E1 occurs at T2 I do the proper thing even though I’m not reasoning about it at T2, that seems compelling evidence that my reasoning at T1 is praiseworthy.
Sure, we agree there, I just wanted to point out that the, shall we say, ‘presence’ of the reasoning in one’s action at T2 is both a necessary and sufficient condition for the action’s being morally praiseworthy if it’s good. The reasoning done at T1 is, of itself, neither necessary nor sufficient.
I don’t agree that the action at T2 is necessary. I would agree that in the absence of the action at T2, it would be difficult to know that the thinking at T1 was praiseworthy, but what makes the thinking at T1 praiseworthy is the fact that it led to a correct conclusion (“given E1 do A1”). It did not retroactively become praiseworthy when E1 occurred.
So you would say that deliberating to the right answer in a moral hypothetical is, on its own, something which should or could earn the deliberator moral praise?
Would you say that people can or ought to be praised or blamed for their answers to the trolley problem?
I would say that committing to a correct policy to implement in case of a particular event occurring is a good thing to have done. (It is sometimes an even better thing to have done if I can then articulate that policy, and perhaps even that commitment, in a compelling way to others.)
I think that’s an example of “deliberating to the right answer in a moral hypothetical earning moral praise” as you’re using those phrases, so I think yes, it’s something that could earn moral praise.
People certainly can be praised or blamed for their answers to the trolley problem—I’ve seen it happen myself—but that’s not terribly interesting.
More interestingly, yes, there are types of answers to the standard trolley problem I think deserve praise.
In case of a possible misunderstanding: I didn’t mean to imply that moral reasoning is literally hypothetical, but that hypotheticals can be a form of moral reasoning (and I hope we aren’t arguing about what ‘reasoning’ is). The problem that I think you have with this is that you believe hypothetical moral reasoning doesn’t generalize? If so, let me show you how that might work.
And this could go on and on until you’ve recalibrated your moral intuitions using hypothetical moral reasoning, and now when asked a similar hypothetical (or put in a similar situation) your immediate intuition is to look at the consequences. Why is the hypothetical part useful? It uncovers previously unquestioned assumptions. It’s also a nice compact form for discussing such issues.
We’re not, and I understand. We do disagree on that claim: I’m suggesting that no moral reasoning can be hypothetical, and that if some bit of reasoning proceeds from a hypothetical, we can know on the basis of that alone that it’s not really moral reasoning. I’m thinking of moral reasoning as the kind of reasoning you’re morally responsible for: if you reason rightly, you ought to be praised and proud, and if you reason wrongly, you ought to be blamed and ashamed. That sort of thing.
This is a good framing, thanks. By ‘on and on’ I assume you mean that the reasoner should go on to examine his decision to look at expected consequences, and perhaps more importantly his preference for the world in which five people live. After all, he shouldn’t trust that any more than the intuition, right?
Can’t that apply to hypotheticals? If you come to the wrong conclusion you’re a horrible person, sort of thing.
I would probably call “moral reasoning” something along the lines of “reasoning about morals”. Even using your above definition, I think reasoning about morals using hypotheticals can result in a judgment, about what sort of action would be appropriate in the situation.
That can’t be what people normally mean by “moral reasoning”. Do you have a philosophy background?
I don’t see why that would be the case. Cheap illustration:
TEACHER: Jimmy, suppose I tell you that P, and also that P implies Q. What does that tell you about Q?
JIMMY: Q is true.
TEACHER: That’s right Jimmy! Your reasoning is praiseworthy!
JIMMY: Getting the right answer while reasoning about that hypothetical fills me with pride!
You’ve taken my conditional: “If something is moral reasoning, it is something for which we can be praised or blamed” for a biconditional. I only intend the former. ETA: I should say more. I don’t mean any kind of praise or blame, but the kind appropriate to morally good or bad action. One might believe that this isn’t different in kind from the sort of praise we offer in response to, say, excellence in playing the violin, but I haven’t gotten the sense that this view is on the table. If we agree that there is such a thing as distinctively moral praise or blame, then I’ll commit to the biconditional.
I suspect ABrooks is continuing his tradition of interpreting “X reasoning” to mean reasoning that has the property of being X, rather than reasoning about X.
If I’m right, I expect his reply here is that your example is not of hypothetical reasoning at all—supposing that actually happened, Jimmy really would be reasoning, so it would be actual reasoning. Sure, it would be reasoning about a hypothetical, but so what?
I share your sense, incidentally, that this is not what people normally mean, either by “moral reasoning” or by “hypothetical reasoning.”
It’s not an interpretation, it’s a claim. If something is reasoning about moral subject matter, then, I claim, it is the sort of thing that is (morally) praiseworthy or blameworthy. When we call someone bad or good for something they’ve done, we at least in part mean to praise or blame their reasoning. And one of the reasons we call someone good or bad, or their action good or bad, is an evaluation of their reasoning as good or bad. And praise and blame are, of course, the products of moral reasoning. And we do consider them to be morally valued: to (excepting cases of ignorance) praise bad people is itself bad, and to blame good people is itself good.
Now, the claim I’m arguing against is the claim that there is another kind of moral reasoning which is a) neither praiseworthy, nor blameworthy, b) does not result in an action or an evaluation of an actual person or action, and c) is somehow tied to or predictive of reasoning that is praiseworthy, blameworthy, and resulting in action or actual evaluation.
So I’ve never intended ‘moral reasoning’ to mean ‘reasoning that is moral’ except as a consequence of my argument. That phrase means, in the first place, reasoning about moral matters. Same goes for how I’ve been understanding ‘hypothetical reasoning’. (ETA: though here, I can’t see how one could draw a distinction between ‘reasoning from a hypothetical’ and ‘reasoning that is hypothetical’. I’m not trying to talk about ‘reasoning about a hypothetical’ in the broadest sense, which might include coming up with trolley problems. I only mean to talk about reasoning that begins with a hypothetical.)
I am sorry if that hasn’t been clear.
Er. Just to make sure I understand this: is “whether it’s correct to put babies in a blender for fun” moral subject matter? If so, does it follow that if I am reasoning about whether it’s correct to put babies in a blender for fun, I am therefore something that is reasoning about moral subject matter? If so, does it follow that I am the sort of thing that is morally praiseworthy or blameworthy?
Sure, if I were to say “Sam is a bad person” because Sam did X, I would likely be trying to imply something about the thought process that led Sam to do X.
I agree that it’s possible for me to call Sam “good” or “bad” based on some aspect of their reasoning, as above, though I don’t really endorse that usage. I agree that it’s possible to call Sam’s act “good” or “bad” based on some aspect of Sam’s reasoning, although I don’t endorse that usage either. I agree that it’s possible to label reasoning that causes me to call either Sam or Sam’s act “good” or “bad” as “good reasoning” or “bad reasoning”, respectively, but this is neither something I could ever imagine myself doing, nor the interpretation I would naturally apply to labeling reasoning in this way.
That’s not clear to me.
That’s not clear to me either.
That’s definitely not clear to me.
Ah, OK. That was in fact not clear; thanks for clarifying it.
Not necessarily, it may or may not be taken up as a moral question. We can, for example, study just how much fun it is and leave aside the question of its moral significance. If you’re reasoning about whether or not it’s right in some moral sense to put babies in a blender, then you’re doing something like moral reasoning, but if this were purely in the hypothetical then I think it would fall short. If you were seriously considering putting babies in a blender, then I think I’d want to call it moral reasoning, but in this case I think you could obviously be praised or blamed for your answer (well, maybe not praised so much).
Sorry, typo. I meant ‘to blame good people (or to blame people for good actions) is bad.’ It shows some praiseworthy decency to appreciate the moral life of, I dunno, MLK. It shows real character to stick up for a good but maligned person. Likewise, it shows some shallowness to have praised someone who only appeared good, but was in fact bad. And it shows some serious defect of character to praise someone we know to be bad (I dunno, Manson?).
What’s the difference between agreeing here, and endorsing the usage?
OK, so just to be clear, you would say that the following are examples of moral reasoning...
“It would be fun to put this baby in that blender, and I want to have fun, but it would be wrong, so I won’t”
“It would be wrong to put this baby in that blender, and I don’t want to be wrong, but it would be fun, so I will”
...and the following are not:
“In general, putting babies in blenders would be fun, and I want to have fun, but in general it would be wrong, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would not do so, all else being equal.”
“In general, putting babies in blenders would be wrong, and I don’t want to be wrong, but in general it would be fun, so if a situation arose where I had a baby and a blender and could put one inside the other with impunity, I would do so, all else being equal.”
Yes? No?
If so, I continue to disagree with you; I absolutely would call those last two cases examples of moral reasoning.
If not, I don’t think I’m understanding you at all.
If A is some object or event that I observe, and L is a label in a language that consistently evokes a representation of A in the minds of native speakers, I agree that it’s possible for me to call A L. If using L to refer to A has other effects beyond evoking A, and I consider those effects to be bad, I might reject using L to refer to A.
For example, I agree that the label “faggot” reliably refers to a male homosexual in American English, but I don’t endorse the usage in most cases because it’s conventionally insulting. (There are exceptions.)
Incidentally, here you demonstrate one of the behaviors that causes me not to endorse the usage of calling Sam “good” or “bad” in this case. First you went from making an observation about a particular act of reasoning to labeling the reasoner in a particular way, and now you’ve gone from labeling the reasoner in that way to inferring other facts about the reasoner. I would certainly agree that the various acts we’re talking about are evidence of praiseworthy decency on Sam’s part, but the way you are talking about it makes it very easy to make the mistake of treating them as logically equivalent to praiseworthy decency.
People do this all the time (e.g., the fundamental attribution error), and it causes a lot of problems.
Oh!
I understand you now.
Thanks for clarifying this.
Also...
Can you please clarify which of your comments in this thread you stand by, and which ones you don’t stand by?
I stand by everything I said about trolley problems. I don’t think an eternity of torture is preferable to a dust speck in one’s eye.
Until you posted this comment, I thought your response was intended as humor.
Edit: And not of the ha ha only serious type.
OK, thanks for clarifying.
An obvious argument in favor of B is that you get to live for 3^^^3 years. A reframing:
A = Dust speck in your eye, after which you lead a normal life except that you cease to exist a mere 60 years later.
B = Tortured for the rest of your life, but you never die.
B is just the traditional idea of hell, isn’t it? (IIRC, the present-day Catholic Church’s idea is that hell is just the inability to see God.)
(nods) That seemed the obvious argument, as you say, though it depends on the notion that being tortured for a year is a net utility gain (relative to not existing for that year at all), which seemed implausible to me. But it turns out that is indeed what ABrooks meant.
(shrug) No accounting for taste.
Edit: He later asserted that had been a joke.
This is another great example of a comment that should have been silently downvoted, not responded to.
I generally avoid downvoting comments that are direct responses to me. I’m not exactly sure why, beyond a sense that it just feels wrong, although I can justify it in a number of different ways that I’m pretty sure aren’t my real reasons.
I do the same. The reasoning that comes to mind is that the timing tends to imply that you did it, and that that—especially if you’re already in an adversarial mode—can provoke a cycle of retaliation that’s harmful to your karma and doesn’t carry much informative value. Short of that, I feel it carries adversarial implications that’re harmful to the quality of discussion.
I’m reasonably sure that that’s my true objection.
Yeah, that’s plausible in my case as well. Evidence in favor of it is that I do become mildly anxious when people who are responding to me get downvoted by others, which suggests that I fear retaliation.
Anyone who has to respond to me has suffered enough already.
I thought that too, but I assumed I’d die right after being tortured anyway. And I’d rather live to age n without ever being tortured than live to age n + m being tortured for m years.
Note that you’re arguing that your preferred policy can never have true drawbacks, rather than arguing that it’s worth it on balance. Be careful.
Policy of not mass murdering people is as close to drawback-free as it gets.
I’m sure you can figure out some trivial drawbacks if you want.
Doesn’t appreciably constrain your behavior, though, unless you happen to be the star of a popular Showtime series or something. Declaring a policy is only meaningful if it actually affects your choices, which in this case only makes sense if you expect to be considering mass murder as a solution to your problems.
And in a situation as extreme as that, I wouldn’t be surprised if some otherwise unthinkable subjective downsides came up.
Suppose I say now, in my non-depressed state, that if I were ever to become so depressed that I wanted to die, I’d prefer that this want be fulfilled.
We cannot allow this any more than we can allow people to sell themselves into slavery as a loan guarantee.
Sure, I can see how if you didn’t like the latter then you’d dislike the former.
Which doesn’t preclude allowing both. I can see benefits of allowing the latter. Or, more to the point, I can see situations where forbidding the latter is morally abhorrent. Specifically, when there is no safety net in place that prevents people from starving or otherwise suffering for lack of the finances they should be able to acquire.
I’d be incredibly surprised if this actually worked clinically.
Start here, and follow the links.
That doesn’t answer my question. I’m not interested in the ethical, legal, and societal barriers to adequate pain management, which is what your link covers as far as I can tell.
I want to know how one intends to circumvent opiate tolerance, and whether or not large doses of stimulants really do counteract the side effects of large doses of opiates in a large enough class of people to be effective, without the side effects of these stimulants becoming undesirable.
Assembling a drug cocktail in order to achieve some central result while minimizing side effects, with ongoing adjustment as the severity of the underlying condition and the patient’s sensitivity to the drugs in question both change, is one of those complicated problems which modern medicine is nonetheless capable of solving, given adequate resources.
Wow. You just decreed it impossible for euthanasia to be done professionally.
I think if someone’s paying you to perform a service for them, that counts as doing it for their benefit. You’re benefiting from the money, not the act itself.
A key point is that they don’t need to advocate the legalization of infanticide, they just need to be able to cogently address the arguments for and against it. Personally, I think that in the US at this time optimal law might restrict abortion significantly more than it currently does and also that in many past cultural contexts efforts to outlaw or seriously deter infanticide would have been harmful. Just disentangling morality from law competently gets a person props.
Infanticide and abortion are okay, as long as doing so increases paperclip production.
However, infanticide and abortion are obviously not alone in that respect.
How do you feel about the destruction of a partially bent piece of steel wire before it has been bent fully into paperclip shape?
Is that some kind of threat???
Okay, what about melting down a large paperclip in order to make multiple smaller paperclips?
I’ll be the first to disagree outright.
First, when a woman is pregnant but will be unable to raise her child, we do not force her to give birth and give the baby up for adoption. This is because bringing a child to term is a painful, expensive, and dangerous nine-month ordeal which we do not think women should be forced into. In what possible circumstances is infanticide ethically permissible when the baby is already born, the woman has already paid the cost of pregnancy and giving birth, and adoption is an option?
In general, I’m not sure it follows from the fact that persons aren’t magic that persons are less valuable than we thought. Maybe babies are just glorified goldfish. Maybe they aren’t valuable in the way we thought they were. But I haven’t seen that evidence.
Due to a severe birth defect, the baby is profoundly mentally retarded, will suffer severe pain its entire life, and will most likely not live to see its fifth birthday.
Unfortunately, thus phrased it fails as a litmus test. For better discrimination, leave out the part about childhood death, then the pain. Then, if you’re adventurous, the retardation.
Once you’ve left out the pain I no longer think killing the baby is ethically permissible. And I don’t see how knowing that people don’t have souls alters my position.
Most people’s moral gut reactions say that humans are very important, and everything else much less so. This argument is easier to make “objective” if humans are the only things with everlasting souls.
Once you get rid of souls, making the argument that humans have some special moral place in the world becomes much more difficult. It’s probably an argument that is beyond the reach of the average person. After all, in the space of “things that one can construct out of atoms”, humans and goldfish are very, very close.
I like what Hook wrote. If I believed that babies were valuable because they have souls and then was told, “no they don’t have souls”, I might for a while value them less. But it has been a very long time since I believed in souls and the value I assign to babies is no longer related at all to my belief about souls (if it ever was).
Sure, they just don’t resemble each other in many morally significant ways (the exception, perhaps, being some kind of experience of pain). There is no reason to think the facts that determine our ethical obligations make use of the same kinds of concepts and classifications we use to distinguish different configurations of atoms. Humans and wet ash are both mostly carbon and water, and so have a lot more in common with each other than with, say, the Sun. But wet ash and the Sun share more of the traits we’re worried about when we’re thinking about morality. The same goes for aesthetic value, if we need a non-ethics analogy.
I think “making the argument that humans have some special moral place in the world” in the absence of an eternal soul is very easy for someone intelligent enough to think about how close humans and goldfish are “in the space of ‘things that one can construct out of atoms.’”
Would you please share? I would really, really like to know how the argument that “humans have some special moral place in the world” would work.
Humans are the only animals that seem to be capable of understanding the concept of morality or making moral judgements.
Morality is complicated and abstract. Maybe cetaceans, chimps, and/or parrots have some concept of morality which is simply beyond the scope of the simple-grammar, concrete-vocabulary interspecies languages so far developed.
Show me someone who actually needs to be convinced. Just about everyone acts as if that is true. One could argue that they are just consequentialists trying to avoid the bad consequences of treating people as if they are not morally special. I’m not even sure that is the psychological reality for psychopaths though.
Also, a corollary of what Matt said, if humans aren’t morally special, is anything?
The question might be less “do humans have some special moral place in the world” than “do human beings have some special moral place in the world”. For example: are we privileging humans over cows to an excessive extent?
Leaving aside the physical complications of moving cows, I think most vegetarians would find the decision to push a cow onto the train tracks to save the lives of four people much easier to make than pushing a large man onto the tracks, implying that humans are more special than cows.
EDIT: The above scenario may not work out so well for Hindus and certain extreme animal rights activists. It may be better to think about pushing one cow to save four cows vs. one human to save four humans. It seems like the cow scenario should be much less of a moral quandary for everyone.
I agree that they would probably have that reaction, but that’s not the question; the question is whether that’s a rational reaction to have given relatively simple starting assumptions.
Since when were terminal moral values determined by rationality?
‘Starting assumptions’ as I used it is basically the same concept as ‘terminal moral values’, and a terminal moral value that refers to humans specifically is arguably more complex than one that talks about life in general or minds in general.
More-complex terminal moral values are generally viewed with some suspicion here, because it’s more likely that they’ll turn out to have internal inconsistencies. It’s also easier to use them to rationalize about irrational behavior.
So then what did you mean by this?
Jack and mattnewport both seemed to do a good job above.
You seem to be equivocating. What do you really think?
(1) Do you believe there are logical reasons for terminal values?
(2) Do you believe that it would be easy to argue that humans have special moral status even without divine external validation (e.g., without a soul)?
You haven’t taken account of discounted future value. A child is worth more than a chimpanzee of equal intelligence because a child can become an adult human. I agree that a newborn baby is not substantially more valuable than a close-to-term one and that there is no strong reason for caring about a euthanised baby over one that is never born, but I’m not convinced that assigning much lower value to young children is a net benefit for a society not composed of rationalists (which is not to say that it is not an net benefit, merely that I don’t properly understand where people’s actions and professed beliefs come from in this area and don’t feel confident in my guesses about what would happen if they wised up on this issue alone).
The proper question to ask is “If these resources are not spent on this child, what will they be spent on instead and what are the expected values deriving from each option?” Thus contraception has been a huge benefit to society: it costs lots and lots of lives that never happen, but it’s hugely boosted the quality of the lives that do.
I do agree that willingness to consider infanticide and debate precisely how much babies and foetuses are worth is a strong indicator of rationality.
My mother made this argument to me probably when I was in high school. Given my position as past infanticide candidate, it was an odd conversation. For the record, she was willing to go up to two or six years old, I think.
And let us not forget the Scrubs episode she also agreed with: “Having a baby is like getting a dog that slowly learns to talk.”
Hey, now you know you were kept around because you were actually wanted, not out of a dull sense of obligation. It’s like having a biological parent who is totally okay with giving up children for adoption—and stuck around!
That’s an interesting take. She clearly loves me and my siblings and has never hurt anyone to the best of my knowledge, besides. So, it wasn’t an uncomfortable topic—only a bit of an odd position to be in.
Although, I also have to point out adoption does not carry the death penalty, so I can imagine a situation in which my hypothetical parent opts not to kill me because they think the fuzz will catch them.
Eliezer, your thought processes and emotions are quite a bit different from those of most currently living humans. And that mostly leaves you quite well-off, but you’ve always got to account for that before you say something like this.
How the hell do you know what others, especially children, would feel in an odd situation like that? Me, I know for sure that I’d MUCH rather have a cold/distant but dutiful and conscientious parent than one who could really, seriously plan to kill Pre-Me for their own convenience.
(If that was supposed to be a joke, I claim that it was in bad taste, just like an anti-AI LessWronger’s joke about planning to assassinate you and your colleagues would be.)
Can you generalize your claim a bit?
I mean, if the general form of your claim is that a joke whose punchline is “your parents wanted you” is in bad taste just as a joke whose punch line is “I’m going to kill you” is, I simply disagree. I find this unlikely, I just mention it because that’s the vast difference between the two examples that jumped out at me.
If the general form of your claim is that a joke that mentions the (unactualizable) possibility of my infanticide is in bad taste just as a joke that mentions the (thus-far-unactualized, but still viable) possibility of my assassination, I also disagree, though I have more sympathy for the claim. I find this more likely.
If it’s something else, I might agree.
Of course, if you don’t actually mean to make a general claim about what is or isn’t in bad taste, but rather to assert somewhat indirectly that references to infanticide upset you and you’d rather not read them, that’s a whole different kettle of fish and my question is meaningless.
Jokes aren’t only about punchlines; here Eliezer was talking about how the (apparently REAL) fact that a murder was contemplated by the guy’s own mother ended up having an upside.
Yes, that’s true, he was indeed talking about that.
I infer that your claim is that talking about that is in sufficiently bad taste to be worth calling out.
Thanks for clarifying.
I have said before “I’m a moderate on abortion—I feel it should be okay up to the fifth trimester.” While this does shock people into adjusting what boundaries might be considered acceptable, I no longer think it is something useful to say in most fora. Too much chance of offending people and just causing their brains to shut off.
It should be safe to use on Philip K. Dick fan forums.
Sounds like it would be interesting to have your mother make some comments on LW, if you think she would be interested.
That’s very unlikely, I think. She’s not interested in rationalism.
Yes, I should also be allowed to kill adults. Especially if they have it coming. After all, the infant still has a chance to grow up to make a worthwhile contribution while there are many adults that are clearly a waste of good oxygen or worse!
I’d say the primary value of an infant is the future value of an adult human minus the conversion cost. Adult humans can be enormously valuable, but sometimes, the expected benefits just can’t match the expected costs, in which case infanticide would be advisable.
However, both costs and benefits can vary by many orders of magnitude depending on context, and there’s no reliable, generally-applicable method to predict either. No matter how bad it looks, someone else might have a more optimistic estimate, so it’s worth checking the market (that is, considering adoption).
Is it acceptable to assume that the conversion cost up to a newborn is less than the cost of the rest of the way to an adult? (Think this through before reading on, to avoid biased thinking about the above. This is called “Meditate”, right?) Given that, wouldn’t a rich eccentric who commits either to spending a pool of money on paying people to roll boulders up and down a hill or to spending it on raising the next child he makes you pregnant with thereby leave you not allowed to say no? (Edited for clarity)
It quite obviously is.
If you mean as an alternative to infanticide, definitely. What’s your point?
What I meant to say is that this complete stranger wants to have a child with Strange7 (for this hypothetical, assume Strange7 can get pregnant), and that it would be as wrong/illegal for Strange7 not to do so as late abortion or infanticide would be. (Edited grandparent for clarity)
If this hypothetical rich person is able and willing to cover all the costs of me bearing a child and the child being raised, they can draft a contract and present it to me. What greater good would be served by making it illegal for me to refuse? Such a law would weaken my negotiating position, increasing the chances that the rich eccentric would be able to avoid internalizing some of the long-term costs and/or that I would be put in the position of having to give up some marginally more lucrative prospect in order to avoid the legal penalty.
I’d rather not try to derive the full ethical calculus of abusive relationships and rape from first principles, but I can point you at some people who’ve studied the field enough to come up with excellent working approximations for most real-world cases.
Real world test of human value along similar lines: Ashley X.
Are you allowed to use moral questions as litmus tests for rationality? Paper clippers are rational too.
It isn’t inconceivable that a human might just value babies intrinsically (rather than because they possess an amount of intellect, emotion, and growth potential).
If anyone here has been reading this and trying to use more abstract values to justify why one should not harm babies, and is unable to come up with anything, and still feels a strong moral aversion to anyone harming babies anywhere ever, then maybe it means you just intrinsically value not harming babies? As in, you value babies for reasons that go beyond the baby’s personhood or lack thereof?
(By the way, the abstract reason I managed to come up with was that current degree of personhood and future degree of personhood interact in additive ways. I’ll react with appreciation to someone poking a hole in that, but I suspect I’ll find another explanation rather than changing my mind. It’s not that I necessarily value babies intrinsically—it’s more that I don’t fully understand my own preferences at an abstract level, but I do know that a moral system that allows gratuitous baby-killing must be one that does not match my preferences. So if you poke a hole in my abstract reasons, it merely means that my attempt to abstractly convey my preferences was wrong. It won’t change the underlying preference.)
But a good chunk of rationality is separating emotions from logic
Even if I insert “epistemic”, I find this only partially true.
Edit: Although my preferences do agree with yours to the extent that harming a young child does seem worse than harming a baby (though both are terrible enough to be illegal and punishable crimes). So I might respect the idea of merciful killing (in times of famine, for example) at a young age to prevent future death-inducing suffering.
If I agreed with this logic, should I be reluctant to admit it here?
Agreeing with the logic is OK, but the problem with reductionism is that if you draw no lines, you’ll eventually find that there’s no difference between anything.
Thus the basic reductionist/humanist conflict: how does one escape the ‘logic’ and draw a line?
Draw a gradient rather than a line. You don’t need sharp boundaries between categories if the output of your judgment is quantitative rather than boolean. You can assign similar values to similar cases, and dissimilar values to dissimilar cases.
See also The Fallacy of Gray. Now you’re obviously not falling for the one-color view, but that post also talks about what to do instead of staying with black-and-white.
Sure. But I was referring to my worry that if you don’t allow your values to be arbitrary (e.g., I don’t care about protecting fetuses but I care about protecting babies), you may find you don’t have any. I guess I’m imagining a story in which a logician tries to argue me down a slippery slope of moral nihilism; there’ll be no step I can point to that I shouldn’t have taken, but I’ll find I’ve stepped too far. When I retreat uphill to where I feel more comfortable, can I expect to have a logical justification?
I’m not sure what “arbitrary” means here. You don’t seem to be using it in the sense that all preferences are arbitrary.
If the nihilist makes a sufficiently circuitous argument, they can ensure that there’s no step you can point to that’s very wrong. But by doing so, they will make slight approximations in many places. Each such step loses an incremental amount of logical justification, and if you add up all the approximations, you’ll find that they’ve approximated away any correlation with the premises. You don’t need to avoid following the argument too far, if you appropriately increase your error bars at each step.
In short: “similar” is not a transitive relation.
This was rather elegantly put.
From your answer, I guess that you do think we have ‘justifications’ for our moral preferences. I’m not sure. It seems to me that on the one hand, we accept that our preferences are arational, but then we don’t really assimilate this. (If our preferences are arational, they won’t have logical justifications.)
That seemed to be exactly how he’s using it. It would be how I’d respond, had I not worked it through already. But there is a difference between “arbitrary” in “the difference between an 8.5-month fetus and a 15-day infant is arbitrary” and in “the decision that killing people is wrong is arbitrary”.
Yes, at some point you need at least one arbitrary principle. Once you have an arbitrary moral principle, you can make non-arbitrary decisions about the morality of situations.
There’s a lot more about this in the whole sequence on metaethics.
I am generally confused by the metaethics sequence, which is why I didn’t correct Pengvado.
Agreed, as long as you have found a consistent set of arbitrary principles to cover the whole moral landscape. But since our preferences are given to us, broadly, by evolution, shouldn’t we expect that our principles operate locally (context-dependent) and are likely to be mutually inconsistent?
So when we adjust to a new location in the moral landscape and the logician asks us to justify our movement, it seems that, generally, the correct answer would be to shrug and say, ‘My preferences aren’t logical. They evolved.’
If there’s a difference between two positions in the moral landscape, we needn’t justify our preference for one position. We just pick the one we prefer. Unless we have a preference for consistency among our principles, in which case we build that into the landscape as well. So the logician could pull you to an (otherwise) immoral place in the landscape unless you decide you don’t consider logical consistency to be the most important moral principle.
Yes.
I have a strong preference for a simple set of moral preferences, with minimal inconsistency.
I admit that the idea of holding “killing babies is wrong” as a separate principle from “killing humans is wrong”, or holding that “babies are human” as a moral (rather than empirical) principle simply did not occur to me. The dangers of generalizing from one example, I guess.
Aren’t abortions unnecessarily painful? That is as strong an argument for the pro-life position as it is for infanticide.
I agree there is a continuum between conception and being, say, 2 years old that is only superficially punctuated by the date of birth. Yet our cultural norms are not so inconsistent...
For example, many of these same people would find it horrific to kill a late-stage fetus. And they might still find it horrific to murder a younger fetus, but nevertheless respect the mother’s choice in the matter.
Voted up, but I think abortion shouldn’t be legal, other than for medical reasons (life of the mother), once the fetus is old enough to have brain activity, and I’m an unrepentant speciesist.
As I recall (I haven’t gone to check), fetuses have “brain activity” about the same time they have a beating heart… ie about one week after conception. The brain activity regulates the heartbeat.
The problem with your definition is that it’s very vague—it doesn’t carve reality at the joints.
I myself prefer the “viability” test. If a foetus is removed from the mother… and survives on its own (yes, with life support) then it is “viable” and gets to live. If it’s too undeveloped to live… then it doesn’t. This stage is actually not very far prior to birth—somewhere around 34-36 weeks (out of 40) (again, as I recall, without having to look it up).
This is very similar to (but gives just a bit more wiggle room than) the “birth” line… ie it disentangles the needs of the mother from the needs of the child, and can be epitomised by the “which would you choose to save” test.
If you had to choose between the life of the mother and the life of the child: if the child is not viable without the mother—then there is no choice necessary: you choose the mother, because choosing the child will result in them both dying. But if the child is viable—then you actually have to choose between them as individual people.
Actually a good bit earlier than that. Something like 24 or 25 weeks, I think, is the age at which you get 50% survival (with intensive medical care, but you seem to say that’s ok).
Ok… then I should clarify. If the mother has a 100% chance to live, but the foetus has only a 50% chance to live… and only with serious intensive care… I do not consider that an equal chance to live.
I use the 34-36 week limit because women are encouraged to continue to 34-36 weeks if at all possible (based on what my mother, who is an experienced midwife, tells me).
I guess the 34-36 week cutoff marks, for me, a reasonable chance at living on just minimal life support. ie the mother and the child have a roughly equal chance of survival… thus it becomes a choice between them where external factors of who they are (or potentially could be) are the main issue—rather than one based simply upon survival probability.
So, as technology improves and artificial substitutes become viable progressively earlier in the developmental process, you’ll eventually be advocating adoption as an alternative to the morning-after pill?
If people are willing to pay for the cost of those artificial substitutes—then I would have no problem with it. If there are sufficient people wanting to adopt, too.
There is still a step between “being fine with it” and “advocating for it”—that’s turning a “could” into a “should”, and you have not given any reason why this should become a “should”.
Right now I still don’t see a benefit in advocating for a child to be placed on this kind of life support if the parents do not want it. If the adoptive parents do, then no problem.
The issue with what FAWS is proposing is that “brain activity” is vague in the extreme. Ants have brain activity...