Sure, as I said, it’s interesting that that’s how these analogies work — by inviting the reader to (for the sake of argument, of course!) zero out almost all of the salient real-world information about the act. They move us further from the real-world scenario and toward increasingly abstract contemplation of possible combinations of rights and obligations.
That’s a wide-open invitation for bias — an invitation for each reader to pick and focus on whichever analogy fits their prejudices, instead of more closely examining the facts … and in particular any facts that may be available about who chooses abortion, why they do that, and what the consequences of that choice are.
To me this suggests that using these analogies is likely to lead to worse decisions — policy decisions and personal decisions.
We want to figure out ahead of time what we should do in morally ambiguous situations. An easy way to find discrepancies in our ethical framework is to invent thought experiments in which some particular aspect of a scenario is made arbitrarily large or small. Would you kill a person to save two people? Why? Would you kill a person to save 200 people? Why? What about killing a billion people to save two billion? If we have values we’d actually like to maximize in the world around us, slight differences in the specific details of those values might prefer greatly different actions in various circumstances, and the easiest way to pin down those slight differences is to invent situations where the distinctions become obvious.
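To make that concrete, here is a minimal sketch of the scaling move in Python. Both scoring rules and the penalty constant are hypothetical, invented purely for illustration and not drawn from anything above; the point is only that sweeping one parameter of a scenario exposes exactly where two superficially similar value systems part ways.

```python
# Toy sketch: two invented ethical "scoring" rules. Rule A counts every
# life equally; Rule B adds a fixed penalty for actively killing anyone.
# (The rules and the penalty of 150 are hypothetical illustrations.)

def plain_tally(saved: int, killed: int) -> float:
    """Rule A: every life counts equally; act iff the score is positive."""
    return saved - killed

def killing_penalized(saved: int, killed: int, penalty: float = 150.0) -> float:
    """Rule B: the same tally, plus a fixed penalty for actively killing."""
    return saved - killed - (penalty if killed > 0 else 0.0)

def verdict(score: float) -> str:
    return "act" if score > 0 else "refrain"

# Scale the classic question: kill one person to save k others.
for k in (2, 20, 200, 2000):
    a = plain_tally(saved=k, killed=1)
    b = killing_penalized(saved=k, killed=1)
    flag = "  <-- the rules diverge here" if verdict(a) != verdict(b) else ""
    print(f"kill 1 to save {k:>4}: A says {verdict(a)}, B says {verdict(b)}{flag}")
```

Rule A acts whenever the trade saves net lives, while Rule B refuses small trades because of its fixed penalty for killing; only a large enough k overwhelms that penalty. That discontinuity, invisible in any single case, is exactly what the arbitrarily scaled thought experiment is designed to surface.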
Why do we want to know in advance what we’d do if asked whether we’d kill a billion people who are only being simulated on a computer in order to save a million people who run on real neurons? Because in determining a course of action, we can begin to investigate what our values actually are. Narrowly defined values are easier to maximize; less computation is required before you have decided on a course of action. If your values are not narrowly defined, or if computing your actions is for some other reason costly or slow, that incurs a huge bias towards inaction, towards whatever choice is realized by “waiting too late”. And so proscribed acts are weighted differently than they should be in our moral framework, as you can see from the other long comment thread on this article.
It seems to me like grandparent criticized the idea of thought experiments as a way to investigate complicated ethical dilemmas, and parent kind of agreed. What is the argument? That by investigating problems unlike the problem we’re actually faced with, we forget to look at relevant data about the problem, because that data isn’t relevant in the thought experiment. That, in our ability to focus on a particular abstraction, we allow for arbitrarily large biases. I’ll concede that point, but this isn’t a bad thing. If a particular value system, as a logical conclusion, endorses infanticide, as demonstrated by some thought experiment, and we claim to hold that ethical framework, then we should either be willing to endorse infanticide (perhaps in the privacy of our own minds) or renounce the ethical framework. Similarly, if a framework can be shown to endorse directing all effort towards impregnating every woman, or towards technology aimed at realizing every potential human, we should either endorse that course of action or renounce the value system that led to it.
What do we actually want to maximize? What are our theoretical, infinitely narrow values? What should they be? The reason abortion is such a controversial issue is that it is currently an issue some people will be forced to decide on. Our lack of narrow values becomes apparent when we end up making actual, real-life decisions about actual, real-life actions, and when we try to defend those actions, our arguments end up describing values which, while narrow, are not consistent with the rest of our actions. People who value human life, and say life begins at conception, are not actively trying to conceive as many humans as possible. People who value human life, and say that a human starts out as a “0 value” human which grows steadily into a “1 value” human around age 12 or 18 or 25 or whatever, are generally unwilling to endorse post-birth abortion of non-sentient infants (even, perhaps, in the privacy of their own minds).
Since our possible future light cone looks very different depending on which of these two values we hold, we will clearly at some point need to decide between courses of action, which means we will have to actually decide what values we’d like to maximize in the universe. Hopefully we will make this decision ahead of time and not at the moment we need to act, because obviously we want to maximize the right value, and waiting to decide incurs a giant penalty in our ability to plan ahead, along with a giant bias towards inaction. That’s why thought experiments are valuable: we can scale certain features of different outcomes to arbitrarily high levels and discover each value’s relative worth, or discover that a value is merely instrumental to another value and has no worth of its own. Thought experiments are our way of hacking our value system, reverse engineering what our actual values are. For this reason, they are an absolutely essential tool.
edit: written on phone. first read-through found 3 typographical errors, will correct at a computer.